Generations of NetApp

Last week I had to install some NetApp systems and migrate data to them. The first NetApp was a FAS2650. Installation went smoothly, and after two hours the system was fully up and running. I had to migrate all the data from a FAS2040, so reconfiguring the system and preparing for the migration was the last task of the day. Looking at the next day, I had to replace a FAS2240-2 with a FAS2552. Hmmm, really a week of now you see them, now you don't. Mostly I work with the larger systems, but it's nice to see that even the smaller systems work the same, do the same…

Upgrade UCS Firmware

Recently I was performing an upgrade to a UCS environment to support some new B200 M4 blades; the current firmware version only supported the B200 M3 blades. With UCS firmware there is a specific order to follow during the upgrade process:

1. Upgrade UCS Manager
2. Upgrade Fabric Interconnect B (subordinate) & Fabric B I/O modules
3. Upgrade Fabric Interconnect A (primary) & Fabric A I/O modules
4. Upgrade blade firmware, placing each ESXi host into maintenance mode first

During the upgrade process, and particularly during the Fabric Interconnect and I/O module upgrades, you will see a swarm of alerts coming from UCSM. This is expected, as some of the links will be…
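Before rebooting either Fabric Interconnect it is worth confirming that the cluster is healthy and HA ready, so that taking down the subordinate FI does not interrupt traffic. A minimal check, assuming SSH access to the UCS cluster IP (the "UCS-A" hostname is a placeholder):

UCS-A# connect local-mgmt
UCS-A(local-mgmt)# show cluster extended-state

The output should report HA READY before you touch the subordinate Fabric Interconnect, and you should only move on to the primary once the subordinate is back up and the cluster state is healthy again.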

Change disk ownership

What to do if a disk has been inserted into a NetApp clustered ONTAP system and that disk already has disk ownership assigned to it? Remove the disk ownership and re-assign the disk to the new node:

netapp1::> set diag
netapp1::*> disk assign -disk NODE2:2c.10.0 -owner NODE1 -force-owner true

If this command does not work, or you get the error "command failed: Failed to find node ID for node", you can use these two commands to remove or modify the ownership:

netapp1::*> disk removeowner -disk NODE2:2c.10.0 -force true
netapp1::*> disk modify -disk NODE2:2c.10.0 -owner NODE2 -force-owner true
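Afterwards it is a good idea to confirm the new owner and drop back out of the diagnostic privilege level. A quick check, assuming the same disk name as above:

netapp1::*> storage disk show -disk NODE2:2c.10.0 -fields owner
netapp1::*> set admin

The owner field should now show the node you assigned the disk to.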

Cannot remove node from cluster when auditing is active

When you run into the error "Failed to disable cluster-scoped metadata volume support prior to unjoin operation":

clA::*> cluster unjoin -node clA-01 -force true

Warning: This command will forcibly unjoin node "clA-01" from the cluster. You must unjoin the failover partner as well. This will permanently remove from the cluster any volumes that remain on that node and any logical interfaces with that node as the home-node. Contact support for additional guidance.
Do you want to continue? {y|n}: y

[Job 1819] Checking prerequisites
Error: command failed: [Job 1819] Job failed: Failed to disable cluster-scoped metadata volume support prior to unjoin operation. Reason: All cluster-scoped volumes are still online. Remove all the volumes on…
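The cluster-scoped volumes in question are typically the audit staging volumes (named MDV_aud_…) that ONTAP creates while vserver auditing is enabled, which is why the unjoin fails when auditing is active. A sketch of how to track them down, assuming "svm1" is a placeholder SVM name:

clA::*> volume show -volume MDV_aud* -fields vserver,volume,state
clA::*> vserver audit show
clA::*> vserver audit disable -vserver svm1
clA::*> vserver audit delete -vserver svm1

In my understanding, deleting the audit configuration cleans up the corresponding staging volumes; once that is done on every SVM, the unjoin prerequisites check should pass.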

What if the e0M is connected to the same subnet as data?

Sometimes you can run into problems like:

- Network performance over Ethernet networks has degraded after a reboot or maintenance.
- Slow backups and transfers are reported on a storage system when performing an NDMP backup, a SnapMirror transfer, or a SnapVault transfer.

Messages and EMS logs will warn with the following (IPv4 and/or IPv6 accordingly):

[netapp: netmon_main: net.if.mgmt.sameSubnet:warning]: ifconfig: IP address 'x.x.x.x' configured on dedicated management port 'e0M' is on the same subnet as IP address 'x.x.x.x' configured on data port Prod1. Management IP addresses must be on dedicated management subnets.

[net.if.mgmt.defaultGateway:warning] route: Static or default route with gateway 'x.x.x.x' is targeted to dedicated management interface 'e0M'. Data traffic using this route…
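The fix is to move e0M onto a dedicated management subnet (or a dedicated management VLAN) so that data traffic can never be routed through the slower management port. A minimal sketch for 7-Mode, with placeholder addresses:

netapp> ifconfig -a
netapp> ifconfig e0M 10.0.99.10 netmask 255.255.255.0

Remember to update the corresponding ifconfig line in /etc/rc as well, otherwise the change will not survive a reboot.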