=NetApp Unified Storage Appliance report in Aperi - design document=
==Create a new Unified Storage Appliance Report - Requirements==
  
NetApp devices represent themselves in three different ways (depending on customer usage). Combined, these are referred to as a Unified Storage Appliance. The three representations include:
 
* Traditional NAS filer (appliance storage is presented to clients as mounted file shares)
* FC block storage array (appliance provides FC attached blocks/volumes and looks just like a storage subsystem from this view)
* iSCSI block storage array (appliance provides iSCSI attached target volumes)
  
However, Aperi's data server currently looks at NetApp appliances only as NAS filers, ignoring their other two personalities. To support a Unified Storage Appliance report, Aperi must gather information for all three representations.
The Aperi data component gathers information on NetApp from three sources:
 
* Aperi data agent, which sees the mounted file systems or shares via NFS or CIFS views from a client perspective
* NetApp SNMP MIB, from which we get some basic, high-level asset and simple capacity information on the whole NetApp box. There is a chance that this technique may produce inaccurate results.
* NetApp SMI-S agent
  
The Unified Storage View needs to show the following type of capacity information:
#Show Appliance capacities - raw, used, reserved storage
#Total storage capacity used for NAS, iSCSI, FCP Blocks, snapshot, snapshot reserve, free, and unallocated
#Asset information will already be available through information gathered in the NAS Self-Contained Profile
#RAID configuration for aggregates
  
Note that all of the above information will be available directly through SMI-S extension(s) from NetApp. We will use our existing SMI-S support structure as a base and gather information from the new NetApp vendor specific profile, which is being created specifically for this report. The intent is to create a new Unified Storage Appliance Report that simply shows any and all information that NetApp gives in the vendor specific profile. We may wish to augment the information provided in the new profile with data we gather from the Array Profile and the SMI-S Self-Contained NAS Profile.
+
==Report Description==

There will be a series of drill down reports structured on four levels.

#The top level report displays general information about a NetApp storage subsystem. A row in this report has the following columns:
##two drill down icons for accessing the "Volume Details" and "Aggregate Details" reports
##"[[#filer|Filer]] Name"
##"Raw Storage" - total disk capacities
##"Assigned Storage" - storage capacity assigned to [[#aggr|aggregates]], calculated as the sum of TotalManagedSpace for all the concrete pools of a filer
##"Free Space on Aggregates" - space that has not been consumed by the existing [[#flexv|FlexVols]] and can be used to create new [[#flexv|FlexVols]]. This is the sum of RemainingManagedSpace for all the concrete pools of a filer. A [[#tradv|traditional volume]] consumes all the space in its [[#aggr|aggregate]].
##"Spare Space" - total space on spare disks
##"Spare Disks" - total number of spare disks
##"Files Space" - total space occupied by files
##"LUNs Space" - total space occupied by LUNs
##"Snapshot Area Space" - total space occupied by snapshots
##"Free Space on Volumes" - total free space on all [[#volume|volumes]]
##"RAID Overhead" - space reserved for RAID overhead
#Second level reports
##Volume Details. Each row corresponds to a [[#volume|volume]] and contains the following columns:
###one drill down icon for accessing the "Volume Content: LUNs and Shares" report
###"Filer Name"
###"Volume Mount Point"
###"Volume Type" - flexible or traditional, determined from the StorageExtent type of the [[#volume|volume]]
###"Total Volume Space" - collected from CIM_LocalFileSystem.FileSystemSize
###"Free Space" - free space on the [[#volume|volume]], collected from CIM_LocalFileSystem.AvailableSpace
###"Number of LUNs" - number of LUNs created on this [[#volume|volume]]
###"Number of Shares" - total number of [[#fileshare|file shares]]
###"Files Space" - space occupied by files in this [[#volume|volume]]
###"LUNs Space" - space occupied by LUNs in this [[#volume|volume]]
###"Snapshot Area Space" - space occupied by snapshots in this [[#volume|volume]]
###"Aggregate the Volume Resides On"
##Aggregate Details. The columns of this report are:
###"Filer Name"
###"Aggregate Name"
###"Aggregate Raid Level"
###"Aggregate Total Space" - collected from CIM_StoragePool.TotalManagedSpace, where CIM_StoragePool is the concrete pool representing this [[#aggr|aggregate]]
###"Aggregate Free Space" - if this [[#aggr|aggregate]] contains [[#flexv|FlexVols]], this value is the space that can be used for the creation of new volumes; if it contains a [[#tradv|traditional volume]], this column is 0. Collected from CIM_StoragePool.RemainingManagedSpace.
###"Number of disks in the Aggregate"
###"Files Space" - space occupied by the files in the [[#volume|volumes]] that reside on the [[#aggr|aggregate]]
###"LUNs Space" - space occupied by LUNs in the [[#volume|volumes]] that reside on the [[#aggr|aggregate]]
###"Snapshot Area Space" - space occupied by snapshots in the [[#volume|volumes]] that reside on the [[#aggr|aggregate]]
###"Free Space on Volumes" - total free space in the [[#volume|volumes]] of the [[#aggr|aggregate]]
#"Volume Content: LUNs and Shares". Describes the [[#fileshare|file shares]] and LUNs that are part of the [[#volume|volume]]. The columns of this report are:
##a drill down icon that is activated only if the row represents a LUN mapped to hosts; it drills down to the "LUN Mapping Details" report
##"Filer Name"
##"Volume Mount Point"
##"LUN/Share Name" - the LUN name, which is an absolute file name, or the export name (the name under which a file/directory is exported/shared)
##"Protocol" - CIFS and/or NFS for shares, and iSCSI and/or FC for LUNs. A file can be shared using both CIFS and NFS, and a LUN can be mapped through both FC and iSCSI.
##"Allocated Space" - "N/A" for exported files and directories; for LUNs, the LUN's capacity
#"LUN Mapping Details". Displays the FC initiator ports or iSCSI initiator nodes that have access to this LUN.
##"Filer Name"
##"Volume Mount Point"
##"LUN Name"
##"Protocol"
##"Host Node/Port name"
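Several of the top level columns are plain sums over a filer's concrete pools. The sketch below illustrates the intended arithmetic for "Assigned Storage" and "Free Space on Aggregates"; the `Pool` record and the sample figures are hypothetical stand-ins, not Aperi classes or real data:

```java
import java.util.List;

public class FilerCapacity {
    // Hypothetical stand-in for a concrete pool (aggregate) row; not an Aperi class.
    record Pool(String name, long totalManagedSpace, long remainingManagedSpace) {}

    // "Assigned Storage": sum of TotalManagedSpace over all concrete pools of a filer.
    static long assignedStorage(List<Pool> pools) {
        return pools.stream().mapToLong(Pool::totalManagedSpace).sum();
    }

    // "Free Space on Aggregates": sum of RemainingManagedSpace over all concrete pools.
    static long freeSpaceOnAggregates(List<Pool> pools) {
        return pools.stream().mapToLong(Pool::remainingManagedSpace).sum();
    }

    public static void main(String[] args) {
        List<Pool> pools = List.of(
                new Pool("aggr0", 500L, 120L),
                // a traditional volume consumes all the space in its aggregate,
                // so RemainingManagedSpace is 0
                new Pool("aggr1", 300L, 0L));
        System.out.println(assignedStorage(pools));       // 800
        System.out.println(freeSpaceOnAggregates(pools)); // 120
    }
}
```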
  
The report is located in Data Manager -> Reporting -> Asset -> System-wide -> NetApp Unified Storage Report. The report will contain data only after probing the NetApp filers using the SMI-S agent, that is, by creating probes for the filers that appear under Storage Subsystems in Aperi Storage Manager -> Monitoring -> Probes -> Create Probe.

==SMI-S agent data collection (Device Server)==

Part of the data needed by this report is collected by the implementation of the "Array" and "NAS Self Contained" profiles. New steps will be added to collect data from the NetApp vendor specific sub-profile.

===LUNs masking and mapping===
 
Paths between iSCSI/FC initiators, LUNs and target ports are collected as part of the LUN Masking and Mapping subprofile.

In the case of the iSCSI protocol, the paths between LUNs and the initiator nodes are not saved in T_RES_DATA_PATH. This problem seems to be solved if the steps "collectSAPAvailableForElement" and "collectDeviceSAPImplementation" are moved before "collectProtocollControllerForUnit" in org.eclipse.aperi.disk.collection.ProbeGenericMaskingMappingProcess. Side effects must be analyzed.

The main classes that implement the LUN Masking and Mapping subprofile are:
:org.eclipse.aperi.disk.collection.ProbeGenericMaskingMappingProcess - defines and executes the steps necessary for collecting the instances in the LUN Masking and Mapping profile
:org.eclipse.aperi.disk.collection.MappingMaskingProcessor - collects data from the steps executed by ProbeGenericMaskingMappingProcess and saves it in the repository by calling the appropriate mappers

The collected data is put in T_RES_CLIENT_SETTING, T_RES_MASKING_INFO, T_RES_PORT and T_RES_DATA_PATH. The iSCSI node names are stored in T_RES_PORT; side effects must be analyzed. T_RES_PORT.FormatName is always 1 and should be changed to reflect the different types of names stored in this table (FC WWN, iSCSI node names).
  
===Correlation between Array Profile entities (storage extents, Pools, LUNs) and Self Contained NAS entities (filesystems)===

The [[#aggr|aggregate]] a [[#volume|volume]] resides on will be found using storage extents, skipping over the logical disk (see the schema below).

A full path from a filesystem (i.e. a [[#volume|volume]], which can be traditional or a [[#flexv|FlexVol]]) to its containing storage pool (an [[#aggr|aggregate]]) is represented in the SMI-S specification in the following way:
  
<code>
::CIM_FileSystem ------- CIM_ResidesOnExtent ------- CIM_LogicalDisk
::CIM_LogicalDisk ------ CIM_BasedOn --------------- CIM_StorageExtent
::CIM_StorageExtent ---- CIM_ConcreteComponent ----- CIM_StoragePool
</code>

The path between CIM_FileSystem and CIM_StorageExtent can be shortcut in this particular case using the condition: CIM_FileSystem.Root == '/vol/' + CIM_StorageExtent.Name.
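The shortcut is a plain string comparison on the Root property. A minimal sketch, assuming the two property values have already been fetched (the method and its inputs are illustrative, not CIM client code):

```java
public class VolumeExtentMatch {
    // Returns true when a filesystem's Root identifies the given extent,
    // i.e. CIM_FileSystem.Root == "/vol/" + CIM_StorageExtent.Name.
    static boolean fsResidesOnExtent(String fileSystemRoot, String extentName) {
        return fileSystemRoot != null && fileSystemRoot.equals("/vol/" + extentName);
    }

    public static void main(String[] args) {
        System.out.println(fsResidesOnExtent("/vol/vol0", "vol0")); // true
        System.out.println(fsResidesOnExtent("/vol/vol0", "vol1")); // false
    }
}
```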
CIM_StorageExtent(s) and their associations to CIM_StoragePool(s) are collected by the Array Profile implementation and stored in T_RES_STORAGE_EXTENT and T_RES_STORAGE_POOL.

Instances of CIM_FileSystem will be collected by the NAS Self-Contained Profile implementation.

The storage extents containing [[#volume|volumes]] are collected in the step named "collectExtentsFromPool", which is a sub-step of "collectPoolsFromComputerSystem" (see org.eclipse.aperi.disk.collection.ProbeNetAppSubsystemProcess). The association used is CIM_StoragePool (starting point) : CIM_ConcreteComponent (association) : CIM_StorageExtent (final point).

The LUNs and the [[#volume|volumes]] they are part of will be correlated using the LUNs' names, the [[#volume|volumes]]' mount points and the serial numbers of their storage subsystems. The LUN names are in fact absolute file names whose prefix is the mount point of the [[#volume|volume]]. For example, the LUN named /vol/vol0/lun0 is part of the WAFL filesystem mounted at /vol/vol0.
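The correlation above can be sketched as a prefix match on the LUN's absolute name, restricted to the same subsystem serial number. The `Volume` and `Lun` record types below are hypothetical stand-ins for rows read from the repository, not Aperi classes:

```java
import java.util.List;
import java.util.Optional;

public class LunVolumeCorrelation {
    // Hypothetical row types; the real data comes from the Aperi repository tables.
    record Volume(String subsystemSerial, String mountPoint) {}
    record Lun(String subsystemSerial, String name) {} // name is absolute, e.g. /vol/vol0/lun0

    // A LUN belongs to the volume (on the same subsystem) whose mount point
    // is a path prefix of the LUN's absolute file name.
    static Optional<Volume> volumeOf(Lun lun, List<Volume> volumes) {
        return volumes.stream()
                .filter(v -> v.subsystemSerial().equals(lun.subsystemSerial()))
                .filter(v -> lun.name().startsWith(v.mountPoint() + "/"))
                .findFirst();
    }

    public static void main(String[] args) {
        List<Volume> vols = List.of(
                new Volume("S1", "/vol/vol0"),
                new Volume("S1", "/vol/vol1"));
        System.out.println(volumeOf(new Lun("S1", "/vol/vol0/lun0"), vols)
                .map(Volume::mountPoint).orElse("none")); // /vol/vol0
    }
}
```

Appending the trailing "/" before matching avoids a mount point such as /vol/vol1 accidentally matching a LUN on /vol/vol10.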
+
==Gui and Data Server Implementation Details==

The class org.eclipse.aperi.TStorm.server.guireq.GuiReportReq will be extended to support a new type of report, which will have five subtypes:
 
+
+
  
 
<code>
public static final int NETAPP_UNIFIED_STORAGE = 88;

/** NETAPP_UNIFIED_STORAGE subreports */
public static final int NETAPP_UNIFIED_STORAGE_MAIN = 112;
public static final int NETAPP_UNIFIED_STORAGE_VOLUMES = 113;
public static final int NETAPP_UNIFIED_STORAGE_AGGR = 114;
public static final int NETAPP_UNIFIED_STORAGE_VOLCONTENT = 115;
public static final int NETAPP_UNIFIED_STORAGE_LUNS = 116;
</code>
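As a sketch of how these subtype constants might drive report selection, the switch below maps each constant to the report it identifies; the dispatch method and the "Top Level" label are illustrative, not the actual RptNetAppUnifiedStorageReport logic:

```java
public class ReportSubtypes {
    public static final int NETAPP_UNIFIED_STORAGE_MAIN = 112;
    public static final int NETAPP_UNIFIED_STORAGE_VOLUMES = 113;
    public static final int NETAPP_UNIFIED_STORAGE_AGGR = 114;
    public static final int NETAPP_UNIFIED_STORAGE_VOLCONTENT = 115;
    public static final int NETAPP_UNIFIED_STORAGE_LUNS = 116;

    // Hypothetical dispatch: choose the report per subtype, mirroring how a
    // report class could select the query to execute for each drill down level.
    static String reportName(int subtype) {
        switch (subtype) {
            case NETAPP_UNIFIED_STORAGE_MAIN:       return "Top Level";
            case NETAPP_UNIFIED_STORAGE_VOLUMES:    return "Volume Details";
            case NETAPP_UNIFIED_STORAGE_AGGR:       return "Aggregate Details";
            case NETAPP_UNIFIED_STORAGE_VOLCONTENT: return "Volume Content: LUNs and Shares";
            case NETAPP_UNIFIED_STORAGE_LUNS:       return "LUN Mapping Details";
            default: throw new IllegalArgumentException("Unknown subtype: " + subtype);
        }
    }

    public static void main(String[] args) {
        System.out.println(reportName(113)); // Volume Details
    }
}
```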
  
 
The following classes will be added to support the new report:

:org.eclipse.aperi.TStorm.gui.NetAppUnifiedStorageTable - GUI tables for displaying the reports
:org.eclipse.aperi.TStorm.common.NetAppUnifiedStorageAdjuster - the data model behind NetAppUnifiedStorageTable
:org.eclipse.aperi.repository.report.RptNetAppUnifiedStorageReport - instantiated in the Data Server; defines and executes the database queries for each subtype of the NetApp Unified Storage report

A new serializable object is defined to transfer data between the Data Server and the GUI: org.eclipse.aperi.TStorm.server.guireq.RespNetAppUnifiedStorageReport.
 
  
 
==Glossary==

This document blends together NetApp, SMI-S and Aperi terms; this glossary clarifies their intended meaning.
  
;<span id="aggr"></span>aggregate :The highest level that [[#ontap|Data ONTAP]] uses for grouping disks. Lower levels are [[#plex|plexes]] and RAID groups, which are the basis for an aggregate. A [[#volume|volume]] is created on an aggregate. An aggregate is represented as a CIM_StoragePool instance and appears in Aperi as a storage pool.
  
;<span id="filer"></span>filer :A shorthand for a FAS or N-series Unified Storage System.
  
 
;<span id="fileshare"></span>file share :A directory or a file that is made available by a filer to clients through the NFS or CIFS protocol.
  
 
;<span id="flexv"></span>FlexVol :A type of [[#volume|volume]] that shares its containing [[#aggr|aggregate]] with many other FlexVols.
 
;<span id="hotsparedisk"></span>hot spare disk :A disk that has not yet been added to any [[#aggr|aggregate]] and can be used to extend an aggregate, to create a new aggregate or to replace a failed disk in an aggregate. [[#ontap|Data ONTAP]] automatically replaces a failed disk with one of the hot spare disks.
 
  
 
;<span id="lun"></span>LUN :Logical unit of storage. Storage volume and LUN have equivalent meanings.
  
;<span id="ontap"></span>Data ONTAP :The operating system which manages a FAS or N-series Unified Storage System.

;<span id="plex"></span>plex :An assembly of RAID groups. [[#ontap|Data ONTAP]] manages mirroring at the plex level. An [[#aggr|aggregate]] has only one plex when it is unmirrored and two plexes when it is mirrored.

Latest revision as of 09:34, 23 September 2007

Create a new Unified Storage Appliance Report - Requirements

NetApp devices represent themselves in three different ways (depending on customer usage). Combined, these are referred to as a Unified Storage Appliance. The three representations include:

  • Traditional NAS filer (appliance storage is presented to clients as mounted file shares)
  • FC block storage array (appliance provides FC attached blocks/volumes and looks just like a storage subsystem from this view)
  • iSCSI block storage array (appliance provides iSCSI attached target volumes)

To support a Unified Storage Appliance report, Aperi must gather information for all three representations.

The Aperi data component gathers information on NetApp from three sources:

  • Aperi data agent which sees the mounted file systems or Shares via NFS or CIFS views from a client perspective
  • NetApp SNMP MIB from which we get some basic, high level asset and simple capacity information on the whole NetApp box.
  • NetApp SMI-S agent

There is a chance that this technique may produce inaccurate results.

The Unified Storage View needs to show the following type of capacity information:

  1. Show Appliance capacities - raw, used, reserved storage
  2. Total storage capacity used for NAS, iSCSI, FCP Blocks, snapshot, snapshot reserve, free, and unallocated.
  3. Asset Information will already be available through information gathered in NAS Self-Contained Profile
  4. RAID configuration for aggregates

Note that all of the above information will be available directly through SMI-S extension(s) from NetApp. We will use our existing SMI-S support structure as a base and gather information from the new NetApp vendor specific profile. The intent is to create a new Unified Storage Appliance Report that simply shows any and all information that NetApp gives in the vendor specific profile. We may wish to augment the information provided in the new profile with data we gather from the Array Profile and the SMI-S Self-Contained NAS Profile.

Report Description

There will be a series of drill down reports structured on four levels.

  1. The top level report displays general information about a NetApp storage subsystem. A row in this report has the following columns:
    1. two drill down icons for accessing "Volume Details" and "Aggregate Details" reports
    2. "Filer Name"
    3. "Raw Storage" - total disk capacities
    4. "Assigned Storage" - storage capacity assigned to aggregates. This is calculated as sum of TotalManagedSpace for all the concrete pools of a filer.
    5. "Free Space on Aggregates" - space that has not been consumed by the existing FlexVols and can be used to create new FlexVols. This is the sum of RemainingManagedSpace for all the concrete pools of a filer. A traditional volume consumes all the space in its aggregate.
    6. "Spare Space" - total space on spare disks
    7. "Spare Disks" - total number of spare disks
    8. "Files Space" - total space occupied by files
    9. "LUNs Space" - total space occupied by LUNs
    10. "Snapshot Area Space" - total space occupied by snapshots
    11. "Free Space on Volumes" - total free space on all volumes
    12. "RAID Overhead" - space reserved for RAID overhead
  2. Second level reports
    1. Volume Details. Each row corresponds to a volume and contains the following columns:
      1. one drill down icon for accessing "Volume Content: LUNs and Shares" report
      2. "Filer Name"
      3. "Volume Mount Point"
      4. "Volume Type" - flexible or traditional.Will be determined based on the StorageExtent type of the volume.
      5. "Total Volume Space" - Collected and displayed here from CIM_LocalFileSystem.FileSystemSize
      6. "Free Space" - free space on the volume Collected from CIM_LocalFileSystem.AvailableSpace
      7. "Number of LUNs" - number of LUNs created on this volume
      8. "Number of Share" - total number of File Shares
      9. "Files Space" - space occupied by files in this volume
      10. "LUNs Space" - space occupied by LUNs in this volume
      11. "Snapshot Area Space" - space occupied by snapshots in this volume
      12. "Aggregate the Volume Resides On"
    2. Aggregate Details. The columns of this report are:
      1. "Filer Name"
      2. "Aggregate Name",
      3. "Aggregate Raid Level"
      4. "Aggregate Total Space" - is collected from CIM_StoragePool.TotalManagedSpace, where CIM_StoragePool is a concrete pool representing this aggregate.
      5. "Aggregate Free Space" - if this aggregate contains FlexVols this value is the space that can be used for the creation of new volumes. If it contains a traditional volume, this column is 0. Collected from CIM_StoragePool.RemainingManagedSpace
      6. "Number of disks in the Aggregate"
      7. "Files Space" - space occupied by the files in the volumes that resides on the aggregate
      8. "LUNs Space" - space occupied by LUNs in the volumes that resides on the aggregate
      9. "Snapshot Area Space" - space occupied by snapshots in the volumes that resides on the aggregate
      10. "Free Space on Volumes" - total free space in the volumes of the aggregate
  3. "Volume Content: LUNs and Shares". Describes the file shares and LUNs that are part of the volume. The columns of this report are:
    1. drill down icon that will be activated only if the row represents a LUN that is mapped to hosts. This icon will drill down to the "LUN Mapping Details"
    2. "Filer Name"
    3. "Volume Mount Point"
    4. "LUN/Share Name" - LUN name, which is an absolute file name or the export name ( the name with which a file/directory is exported/shared )
    5. "Protocol" - CIFS or/and NFS for shares and iSCSI or/and FC for LUNs. A file can be shared using both CIFS and NFS and a LUN can be mapped both through FC and iSCSI.
    6. "Allocated Space" - for exported files and directories this will be "N/A", for LUNs will represent the LUN's capacity
  4. "LUN Mapping Details". Displays the FC initiator ports or iSCSI initiator nodes that have access to this LUN.
    1. "Filer Name"
    2. "Volume Mount Point"
    3. "LUN Name"
    4. "Protocol"
    5. "Host Node/Port name"

The report is located in Data Manager -> Reporting -> Asset -> System-wide -> NetApp Unified Storage Report. The report will contain data only after probing the NetApp filers using the SMI-S agent, i.e. by creating probes for the filers that appear under Storage Subsystems in Aperi Storage Manager -> Monitoring -> Probes -> Create Probe.

SMI-S agent data collection (Device Server)

Part of the data needed by this report is collected by the implementation of the "Array" and "NAS Self Contained" profiles. New steps will be added to collect data from the NetApp vendor-specific sub-profile.

LUNs masking and mapping

Paths between iSCSI/FC initiators, LUNs and target ports are collected as part of the LUN Masking and Mapping subprofile.

In the case of the iSCSI protocol, the paths between LUNs and the initiator nodes are not saved in T_RES_DATA_PATH. This problem appears to be solved if the steps "collectSAPAvailableForElement" and "collectDeviceSAPImplementation" are moved before "collectProtocollControllerForUnit" in org.eclipse.aperi.disk.collection.ProbeGenericMaskingMappingProcess. Side effects must be analyzed.
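The proposed fix amounts to a permutation of the probe's step list. The step names come from the text above; the list-handling machinery below is only a sketch of the kind of reordering ProbeGenericMaskingMappingProcess would need, not its actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the proposed probe-step reordering. The step names are taken
 * from the design text; the surrounding machinery is hypothetical.
 */
public class MaskingMappingStepOrder {
    /** Moves the two SAP-related steps in front of collectProtocollControllerForUnit. */
    public static List<String> reorder(List<String> steps) {
        List<String> result = new ArrayList<>(steps);
        result.remove("collectSAPAvailableForElement");
        result.remove("collectDeviceSAPImplementation");
        int target = result.indexOf("collectProtocollControllerForUnit");
        // Insert both SAP steps immediately before the ProtocolControllerForUnit step.
        result.add(target, "collectDeviceSAPImplementation");
        result.add(target, "collectSAPAvailableForElement");
        return result;
    }
}
```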

The main classes that implement LUNs Masking and Mapping subprofile are:

org.eclipse.aperi.disk.collection.ProbeGenericMaskingMappingProcess - defines and executes the steps necessary for collecting the instances in LUN Masking and Mapping profile
org.eclipse.aperi.disk.collection.MappingMaskingProcessor - collects data from the steps executed by ProbeGenericMaskingMappingProcess and saves it in the repository by calling appropriate mappers

The collected data is put in T_RES_CLIENT_SETTING, T_RES_MASKING_INFO, T_RES_PORT and T_RES_DATA_PATH. The iSCSI node names are stored in T_RES_PORT. Side effects must be analyzed: T_RES_PORT.FormatName is always 1 and should be changed to reflect the different types of names stored in this table (FC WWNs, iSCSI node names).
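One way to make FormatName meaningful would be a small set of format codes distinguishing the name types stored in T_RES_PORT. The codes and the classification heuristic below are purely illustrative, not values defined by Aperi or SMI-S:

```java
/**
 * Illustrative format codes for the kinds of names stored in T_RES_PORT.
 * The numeric values and the classification heuristic are hypothetical.
 */
public enum PortNameFormat {
    FC_WWN(1),          // Fibre Channel World Wide Name
    ISCSI_NODE_NAME(2); // iSCSI node name (e.g. an IQN)

    private final int code;

    PortNameFormat(int code) { this.code = code; }

    public int code() { return code; }

    /** Very rough classification based on the name's syntax. */
    public static PortNameFormat classify(String name) {
        return name.startsWith("iqn.") || name.startsWith("eui.")
                ? ISCSI_NODE_NAME : FC_WWN;
    }
}
```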

Correlation between Array Profile entities (storage extents, Pools, LUNs) and Self Contained NAS entities (filesystems)

The aggregate a volume resides on will be found using storage extents, skipping over the logical disk (see the schema below).

A full path from a filesystem (i.e. a volume, which can be traditional or a FlexVol) to its containing storage pool (aggregate) is represented in the SMI-S specification in the following way:

CIM_FileSystem ------- CIM_ResidesOnExtent ------- CIM_LogicalDisk
CIM_LogicalDisk ------ CIM_BasedOn --------------- CIM_StorageExtent
CIM_StorageExtent ---- CIM_ConcreteComponent ----- CIM_StoragePool

The path between CIM_FileSystem and CIM_StorageExtent can be shortcut, in this particular case using the condition CIM_FileSystem.Root == '/vol/' + CIM_StorageExtent.Name.
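The shortcut condition can be expressed directly in code; a minimal sketch, with illustrative class and method names:

```java
/**
 * Sketch of the CIM_FileSystem-to-CIM_StorageExtent shortcut described above:
 * a filesystem resides on an extent when its root equals '/vol/' + extent name.
 * Class and method names are hypothetical.
 */
public class VolumeExtentCorrelation {
    public static boolean residesOn(String fileSystemRoot, String extentName) {
        return fileSystemRoot.equals("/vol/" + extentName);
    }
}
```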

CIM_StorageExtent(s) and their associations to CIM_StoragePool(s) are collected by the Array Profile implementation and stored in T_RES_STORAGE_EXTENT and T_RES_STORAGE_POOL.

Instances of CIM_FileSystem will be collected by the NAS Self-Contained Profile implementation.

The storage extents containing volumes are collected in the step named "collectExtentsFromPool", which is a sub step of "collectPoolsFromComputerSystem" (see org.eclipse.aperi.disk.collection.ProbeNetAppSubsystemProcess). The association used is CIM_StoragePool (starting point):CIM_ConcreteComponent (association):CIM_StorageExtent (final point).

The LUNs and the volumes they are part of will be correlated using the LUNs' names, the volumes' mount points and the serial number of their storage subsystem. The LUN names are in fact absolute file names whose prefix is the mount point of the volume. For example, the LUN named /vol/vol0/lun0 is part of the WAFL filesystem mounted at /vol/vol0.
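Deriving the containing volume's mount point from a LUN name then amounts to stripping the last path component; a minimal sketch under that assumption (names illustrative):

```java
/**
 * Sketch: a LUN name is an absolute file name whose prefix is the mount
 * point of its containing volume, e.g. /vol/vol0/lun0 -> /vol/vol0.
 * Class and method names are hypothetical.
 */
public class LunVolumeCorrelation {
    public static String volumeMountPoint(String lunName) {
        int lastSlash = lunName.lastIndexOf('/');
        // Everything before the final path component is the volume mount point.
        return lunName.substring(0, lastSlash);
    }
}
```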

Gui and Data Server Implementation Details

The class org.eclipse.aperi.TStorm.server.guireq.GuiReportReq will be extended to support a new type of report, which will have five subtypes:

/** New Report type */

public static final int NETAPP_UNIFIED_STORAGE = 88;

/** NETAPP_UNIFIED_STORAGE subreports */

public static final int NETAPP_UNIFIED_STORAGE_MAIN = 112;

public static final int NETAPP_UNIFIED_STORAGE_VOLUMES = 113;

public static final int NETAPP_UNIFIED_STORAGE_AGGR = 114;

public static final int NETAPP_UNIFIED_STORAGE_VOLCONTENT = 115;

public static final int NETAPP_UNIFIED_STORAGE_LUNS = 116;

The following classes will be added to support the new report:

org.eclipse.aperi.TStorm.gui.NetAppUnifiedStorageTable - GUI tables for displaying the reports
org.eclipse.aperi.TStorm.common.NetAppUnifiedStorageAdjuster - the data model behind NetAppUnifiedStorageTable
org.eclipse.aperi.repository.report.RptNetAppUnifiedStorageReport - instantiated in the Data Server; defines and executes the database queries for each subtype of the NetApp Unified Storage report

A new serializable object is defined to transfer data between Data Server and GUI: org.eclipse.aperi.TStorm.server.guireq.RespNetAppUnifiedStorageReport.
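On the Data Server side, the subreport constants above would be used to select the query to run. A hedged sketch of such a dispatch; the constant values come from the listing above, while the method and query labels are illustrative only:

```java
/**
 * Illustrative dispatch on the NetApp Unified Storage subreport constants.
 * The constant values are from the design text; the method and the query
 * labels are hypothetical.
 */
public class NetAppReportDispatch {
    public static final int NETAPP_UNIFIED_STORAGE_MAIN = 112;
    public static final int NETAPP_UNIFIED_STORAGE_VOLUMES = 113;
    public static final int NETAPP_UNIFIED_STORAGE_AGGR = 114;
    public static final int NETAPP_UNIFIED_STORAGE_VOLCONTENT = 115;
    public static final int NETAPP_UNIFIED_STORAGE_LUNS = 116;

    /** Returns a label for the query a given subreport type would execute. */
    public static String queryFor(int subType) {
        switch (subType) {
            case NETAPP_UNIFIED_STORAGE_MAIN:       return "main";
            case NETAPP_UNIFIED_STORAGE_VOLUMES:    return "volume details";
            case NETAPP_UNIFIED_STORAGE_AGGR:       return "aggregate details";
            case NETAPP_UNIFIED_STORAGE_VOLCONTENT: return "volume content";
            case NETAPP_UNIFIED_STORAGE_LUNS:       return "LUN mapping details";
            default:
                throw new IllegalArgumentException("unknown subreport: " + subType);
        }
    }
}
```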


Glossary

This document blends together NetApp, SMI-S and Aperi terms; this glossary is intended to clarify their meaning.

aggregate 
It is the highest level that Data ONTAP uses for grouping disks. Lower levels are plexes and RAID groups, which are the basis for an aggregate. A volume is created on an aggregate. An aggregate is represented as a CIM_StoragePool instance and it appears in Aperi as a storage pool.
filer 
A shorthand for a FAS or N-series Unified Storage System.
file share 
A directory or a file that is made available by a filer to clients through NFS or CIFS protocol
FlexVol 
A FlexVol is a type of volume that shares its containing aggregate with many other FlexVols.
LUN  
Logical unit of storage. "Storage volume" and "LUN" have equivalent meanings.
Data ONTAP 
It is the operating system which manages a FAS or N-series Unified Storage System.
plex 
It is an assembly of RAID groups. Data ONTAP manages mirroring at plex level. An aggregate has only one plex when it is unmirrored and two plexes when it is mirrored.
Traditional Volume 
A traditional volume is a volume that occupies by itself an entire aggregate.
volume 
It is a Data ONTAP file system of type WAFL. The LUNs are created on a volume as files but exposed as block storage space. The file shares are also part of a volume. A volume is represented by the NetApp SMI-S agent using an instance of CIM_FileSystem.
