OpenNebula LVM Datastore – The missing link
Recently we struggled a bit with our first OpenNebula cluster using LVM datastores.
In the past we used Ceph as storage, where the integration with OpenNebula is well documented and all in all straightforward.
For a new private cloud of one of our customers, the decision was taken to go with an HP MSA 1040 with a SAS controller, due to the limited budget and because massive scaling is not expected in the near future.
So we followed the OpenNebula documentation, but we experienced an awful lack of hints on how to bring everything together.
So here is what we did to get the setup running:
As mentioned in the docs, you don't need to have cLVM (Clustered Logical Volume Manager) in place. The LVM metadata is spread by OpenNebula, but you have to be careful regarding some conventions.
Create or change Datastores
Take the config from the original docs:
> cat system.conf
NAME = lvm_system
TM_MAD = fs_lvm
TYPE = SYSTEM_DS
> onedatastore create system.conf
ID: 103
> cat image.conf
NAME = production
DS_MAD = fs
TM_MAD = fs_lvm
DISK_TYPE = "BLOCK"
TYPE = IMAGE_DS
SAFE_DIRS="/var/tmp /tmp"
> onedatastore create image.conf
ID: 107
Install LVM2 on all nodes
As we're running Ubuntu Server 16.04 LTS:
sudo apt install lvm2
Stop and disable the LVM metadata caching daemon lvmetad:
sudo systemctl stop lvmetad.service
sudo systemctl disable lvmetad.service
Change /etc/lvm/lvm.conf on all nodes
use_lvmetad = 0
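If you have to touch many nodes, a small sketch to apply the change non-interactively (assuming the stock lvm.conf where use_lvmetad is set to 1):
sudo sed -i 's/use_lvmetad *= *1/use_lvmetad = 0/' /etc/lvm/lvm.conf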
Add the user oneadmin to the group "disk":
sudo usermod -a -G disk oneadmin
Set up LVM PV and VG
If you want to run both the system and the image datastore on LVM, create two physical volumes. As we only have one RAID, we first created two partitions of appropriate sizes with fdisk.
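fdisk is interactive; if you prefer something scriptable, a rough sketch with parted could look like this (assuming /dev/sdb is the RAID device; the 20%/80% split is only an example, pick sizes that fit your setup):
sudo parted -s /dev/sdb mklabel gpt              # WARNING: wipes the existing partition table
sudo parted -s /dev/sdb mkpart primary 0% 20%    # becomes /dev/sdb1 for the system datastore
sudo parted -s /dev/sdb mkpart primary 20% 100%  # becomes /dev/sdb2 for the image datastore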
sudo pvcreate /dev/sdb1
sudo pvcreate /dev/sdb2
Now the tricky part:
For the system datastore you must create a volume group following the naming convention vg-one-<DatastoreId>. In our case the system datastore ID is 103:
sudo vgcreate vg-one-103 /dev/sdb1
Create the volume group and a logical volume for the images:
sudo vgcreate vg-one-107 /dev/sdb2
sudo lvcreate -n images -l 100%FREE vg-one-107
(lvcreate needs a name and a size; here the logical volume simply takes all the free space in the volume group)
We will come back later to the next steps for bringing up the image datastore.
Ensure with vgscan that all hosts in the cluster see the volume groups:
vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg-one-103" using metadata type lvm2
Found volume group "vg-one-107" using metadata type lvm2
Right now you should have the system datastore in place and ready to run.
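To double-check, list the datastores on the frontend; the IDs from above (103 and 107) should show up:
> onedatastore list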
Image Datastore
In contrast to the system datastore, the image datastore needs a file system, plus something like NFS or GlusterFS in place to make the images accessible to the nodes.
Create a file system on the LV "images":
sudo mkfs.ext4 /dev/vg-one-107/images
Mount the volume, e.g.
mount /dev/vg-one-107/images /mnt/images
Add it to /etc/fstab to make the mount persistent.
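A minimal fstab entry could look like this (device and mount point taken from above; the options are just common defaults):
/dev/vg-one-107/images /mnt/images ext4 defaults 0 2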
Install the NFS server on the frontend
sudo apt install nfs-kernel-server
Export the image directory
Add the directory and nodes to /etc/exports
/mnt/images node02(rw,sync,no_subtree_check,no_root_squash) node03(rw,sync,no_subtree_check,no_root_squash)
! Make sure to add the no_root_squash option !
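To activate the new export without rebooting, re-export all entries (restarting nfs-kernel-server works as well):
sudo exportfs -ra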
Create a symbolic link to the datastore
ln -s /mnt/images /var/lib/one/datastores/107
and change owner to oneadmin
chown -R oneadmin:oneadmin /var/lib/one/datastores/107
NFS Settings on cluster nodes
Install the NFS client
sudo apt install nfs-common
Create the mount point
mkdir /mnt/images
Add the NFS export to /etc/fstab
192.168.123.123:/mnt/images /mnt/images nfs rw,soft,intr,rsize=32768,wsize=32768 0 0
mount -a
Create symbolic link to datastore
ln -s /mnt/images /var/lib/one/datastores/107
Change the owner to oneadmin (the same chown as on the frontend):
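chown -R oneadmin:oneadmin /var/lib/one/datastores/107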
That's all!
Keep in mind that you have to update the exports on the NFS server whenever you add a new node to your cluster.
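For example, if a hypothetical node04 joins the cluster, extend the export line in /etc/exports and re-export:
/mnt/images node02(rw,sync,no_subtree_check,no_root_squash) node03(rw,sync,no_subtree_check,no_root_squash) node04(rw,sync,no_subtree_check,no_root_squash)
sudo exportfs -ra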