Diskhotplug support should be available in clouds running onapp-storage versions >= 3.1.2.
You can check whether diskhotplug support is available by looking for /usr/pythoncontroller/diskhotplug on the cloudboot HVs in the customer cloud.
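A minimal presence check, for example:
HV> ls -l /usr/pythoncontroller/diskhotplug # if the file exists and is executable, hotplug support is present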
/usr/pythoncontroller/diskhotplug list # will show the disks plugged into the various slots on the controllers.
To remove a disk drive from a controller, first check that all of the content on the storage node is redundant. Run getdegradedvdisks to see if there are any non-redundant vdisks; if there are, these will need to be rebalanced so that there is HV-level redundancy. Alternatively, look in the nodes view and list all vdisks. VDisks can be rebalanced away from the HV so that the disk content remains synched.
As long as there is a good path for all stripe members and they are all redundant, vdisk content does not have to be rebalanced away from the storage node, but the diskhotplug unplug operation will degrade those vdisks.
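For example (the exact output format may vary by onapp-storage version):
HV> getdegradedvdisks # should report no degraded/non-redundant vdisks before the disk is unplugged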
Ensure that there are recent backups of the VMs and that the customer is happy to unplug the disk.
Once the above checks have been made:
Deselect the drive on the HV edit page in the UI and save the page. This will stop the drive from being used by storage. If the drive will be used again, re-enable this option when the disk is re-inserted.
HV> diskhotplug list # will show a storage-controller and slot view of the disks.
e.g. Slot 0 - /dev/sda (SCSIid:Z2AAL2M41BC14_Z2AAL2M4,NodeID:1337660081)
In this case the NodeID corresponds to the OnApp UUID, and the SCSIid corresponds to the output of onapp_scsi_id.
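To cross-check a device, the SCSIid can be compared against the tool's output directly (the invocation below is an assumed form):
HV> onapp_scsi_id /dev/sda # assumed invocation; output should match the SCSIid shown by diskhotplug list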
You can confirm which storage controller this maps to by looking at the storage node config files under /onappstore/VMconfigs/...
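One way to find the matching controller, sketched here with the NodeID from the example above:
HV> grep -rl 1337660081 /onappstore/VMconfigs/ # the matching config file identifies the controller owning this node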
HV> diskhotplug unassign <controller> <slot> # will deselect the disk drive and close down its paths (degrading any vdisks still present); the drive will then show as unused, or show an error. If an error is shown, please contact the storage devs.
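To confirm the unassign took effect, the listing can be re-run:
HV> diskhotplug list # the slot should now show the disk as unused (or an error, as noted above)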
The physical drive should now show no activity and can be removed from the HV, provided the BIOS also supports disk hotplug.
If the HV does not support live disk hotplug at the physical layer, then the vdisks on the HV's other storage nodes will also have to be redundant, the VMs migrated away from the HV, and the HV brought down for maintenance.
A new drive can be added.
If the new drive appears as a physical drive in dom0, the next step is to format it in preparation for OnApp (unless it has already been prepared for this cloud).
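Standard Linux tooling can confirm that dom0 sees the drive, for example:
HV> fdisk -l # or lsblk; the new drive should appear as an unused /dev/sdX device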
Add the drive in the HV edit page.
HV> formatandconfigure /dev/<sdX>
HV> diskhotplug assign <controller> <free-slot> /dev/<sdX>
The storage node should then appear in the controller. Telnet to the controller and run mount to check that the xvdX or vdX device is now mounted correctly. Once mounted correctly, the node will start reporting its OnApp UUID to the other HVs. Once all HVs have been updated, the node can be added to a datastore and used for other operations.
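A sketch of that verification, assuming the controller's address is known (e.g. from its VMconfigs entry):
HV> telnet <controller-address>
controller> mount # the new xvdX or vdX device should be listed as mounted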