Creating LVM in Linux And Integrating With Hadoop Cluster

Shubham kumar
Nov 13, 2020

--

Task Description 📄

🔅Integrating LVM with Hadoop and providing Elasticity to Data Node Storage
🔅Increase or Decrease the Size of Static Partition in Linux
🔅Automating LVM Partition using Python-Script.

To give Hadoop's DataNode elastic storage, we mount its data directory on an LVM logical volume instead of a static partition.
So, first create the logical volume:

$ pvcreate /dev/sda # create a PV from the /dev/sda disk (the same command can create PVs for multiple disks at once)
$ vgcreate Hadoop-slave /dev/sda # create a VG named Hadoop-slave
$ lvcreate --size 50G --name slave Hadoop-slave # create a 50GB LV named slave from the Hadoop-slave VG
$ mkfs.ext4 /dev/Hadoop-slave/slave # format the LV with ext4
$ mount /dev/Hadoop-slave/slave /datanode # mount the LV on the directory named /datanode
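The /datanode directory mounted here is the one the DataNode should be pointed at (the dfs.data.dir / dfs.datanode.data.dir property in hdfs-site.xml, depending on the Hadoop version), so whatever size this LV has is the storage that node contributes to the cluster; hadoop dfsadmin -report on the master shows it. That is the integration part: resizing the LV resizes the DataNode's share without touching Hadoop's configuration.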

Extending the LV takes two steps:

$ lvextend --size +5G /dev/Hadoop-slave/slave # extend the LV by 5GB

$ resize2fs /dev/Hadoop-slave/slave # grow the ext4 filesystem to cover the newly added 5GB (works online, no unmount needed)
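(lvextend also has a -r/--resizefs option that runs the filesystem resize in the same command, but keeping the two steps separate makes it clear that the LV and the filesystem are resized independently.)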

Reducing the LV takes five steps:

$ umount /dev/Hadoop-slave/slave # unmount the LV first (mandatory; ext4 cannot be shrunk while mounted)

$ e2fsck -f /dev/Hadoop-slave/slave # force a filesystem check and clean the inode table

$ resize2fs /dev/Hadoop-slave/slave 40G # shrink the ext4 filesystem to the reduced size, i.e. 40GB

$ lvreduce --size 40G /dev/Hadoop-slave/slave # reduce the LV to 40GB

$ mount /dev/Hadoop-slave/slave /datanode # mount the LV back on the /datanode directory
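Note that shrinking is only safe when the data already on the filesystem fits within the new 40GB, so take a backup before reducing; the forced e2fsck is not optional, because resize2fs refuses to shrink a filesystem that has not just been checked.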

✔✔Automating LVM Partition using Python-Script.
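The whole flow above can be wrapped in a small menu-driven Python script. The sketch below is one minimal way to do it, assuming the same names used earlier (/dev/sda, the Hadoop-slave VG, the slave LV, the /datanode mount point) and that it runs as root; it simply shells out to the same LVM commands with subprocess, so it is a starting point rather than a polished tool.

import subprocess

def run(cmd):
    """Print and run a shell command, stopping on the first failure."""
    print(f"+ {cmd}")
    subprocess.run(cmd, shell=True, check=True)

def create_lv(disk, vg, lv, size, mount_point):
    run(f"pvcreate {disk}")                            # physical volume
    run(f"vgcreate {vg} {disk}")                       # volume group
    run(f"lvcreate --size {size} --name {lv} {vg}")    # logical volume
    run(f"mkfs.ext4 /dev/{vg}/{lv}")                   # format with ext4
    run(f"mkdir -p {mount_point}")
    run(f"mount /dev/{vg}/{lv} {mount_point}")         # mount for the datanode

def extend_lv(vg, lv, extra):
    run(f"lvextend --size +{extra} /dev/{vg}/{lv}")    # grow the LV
    run(f"resize2fs /dev/{vg}/{lv}")                   # grow the filesystem online

def reduce_lv(vg, lv, new_size, mount_point):
    run(f"umount /dev/{vg}/{lv}")                      # shrink must be done offline
    run(f"e2fsck -f /dev/{vg}/{lv}")                   # forced check before shrinking
    run(f"resize2fs /dev/{vg}/{lv} {new_size}")        # shrink the filesystem first
    run(f"lvreduce -y --size {new_size} /dev/{vg}/{lv}")  # then reduce the LV (-y skips the prompt)
    run(f"mount /dev/{vg}/{lv} {mount_point}")         # remount on the datanode directory

if __name__ == "__main__":
    print("1) Create LV   2) Extend LV   3) Reduce LV")
    choice = input("Choice: ").strip()
    if choice == "1":
        create_lv(input("Disk (e.g. /dev/sda): "), input("VG name: "),
                  input("LV name: "), input("Size (e.g. 50G): "),
                  input("Mount point (e.g. /datanode): "))
    elif choice == "2":
        extend_lv(input("VG name: "), input("LV name: "),
                  input("Extra size (e.g. 5G): "))
    elif choice == "3":
        reduce_lv(input("VG name: "), input("LV name: "),
                  input("New size (e.g. 40G): "), input("Mount point: "))
    else:
        print("Unknown choice")

The ordering inside reduce_lv matters: the filesystem is shrunk before the LV so that the LV reduction never cuts into live data.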
