This chapter describes the principles behind Logical Volume Manager (LVM) and its basic features that make it useful under many circumstances. The YaST LVM configuration can be reached from the YaST Expert Partitioner. This partitioning tool enables you to edit and delete existing partitions and create new ones that should be used with LVM.

Warning: Risks

Using LVM might be associated with increased risk, such as data loss. Risks also include application crashes, power failures, and faulty commands. Save your data before implementing LVM or reconfiguring volumes. Never work without a backup.

5.1 Understanding the logical volume manager

  

LVM enables flexible distribution of hard disk space over several physical volumes (hard disks, partitions, LUNs). It was developed because the need to change the segmentation of hard disk space might arise only after the initial partitioning has already been done during installation. Because it is difficult to modify partitions on a running system, LVM provides a virtual pool (volume group or VG) of storage space from which logical volumes (LVs) can be created as needed. The operating system accesses these LVs instead of the physical partitions. Volume groups can span more than one disk, so that several disks or parts of them can constitute one single VG. In this way, LVM provides a kind of abstraction from the physical disk space that allows its segmentation to be changed in a much easier and safer way than through physical repartitioning.

Figure 5.1 compares physical partitioning (left) with LVM segmentation (right). On the left side, one single disk has been divided into three physical partitions (PART), each with a mount point (MP) assigned so that the operating system can access them. On the right side, two disks have been divided into two and three physical partitions each. Two LVM volume groups (VG 1 and VG 2) have been defined. VG 1 contains two partitions from DISK 1 and one from DISK 2. VG 2 contains the remaining two partitions from DISK 2.

Figure 5.1: Physical partitioning versus LVM

  

In LVM, the physical disk partitions that are incorporated in a volume group are called physical volumes (PVs). Within the volume groups in Figure 5.1, four logical volumes (LV 1 through LV 4) have been defined, which can be used by the operating system via the associated mount points (MP). The border between different logical volumes need not be aligned with any partition border. See the border between LV 1 and LV 2 in this example.

LVM features:

  • Several hard disks or partitions can be combined in a large logical volume.

  • Provided the configuration is suitable, an LV can be enlarged when the free space is exhausted.

  • Using LVM, it is possible to add hard disks or LVs in a running system. However, this requires hotpluggable hardware that is capable of such actions.

  • It is possible to activate a striping mode that distributes the data stream of a logical volume over several physical volumes. If these physical volumes reside on different disks, this can improve the reading and writing performance like RAID 0.

  • The snapshot feature enables consistent backups (especially for servers) in the running system.

Note: LVM and RAID

Even though LVM also supports RAID levels 0, 1, 4, 5 and 6, we recommend using mdraid (see Chapter 7, Software RAID configuration). However, LVM works fine with RAID 0 and 1, as RAID 0 is similar to common logical volume management (individual logical blocks are mapped onto blocks on the physical devices). LVM used on top of RAID 1 can keep track of mirror synchronization and is fully able to manage the synchronization process. With higher RAID levels you need a management daemon that monitors the states of attached disks and can inform administrators if there is a problem in the disk array. LVM includes such a daemon, but in exceptional situations such as a device failure, the daemon does not work properly.

Warning: IBM Z: LVM root file system

If you configure the system with a root file system on LVM or a software RAID array, you must place /boot/zipl on a separate, non-LVM and non-RAID partition, otherwise the system will fail to boot. The recommended size for such a partition is 500 MB and the recommended file system is Ext4.

With these features, using LVM already makes sense for heavily-used home PCs or small servers. If you have a growing data stock, as in the case of databases, music archives, or user directories, LVM is especially useful. It allows file systems that are larger than the physical hard disk. However, keep in mind that working with LVM is different from working with conventional partitions.

You can manage new or existing LVM storage objects by using the YaST Partitioner. Instructions and further information about configuring LVM are available in the official LVM HOWTO.

5.2 Creating volume groups

  

An LVM volume group (VG) organizes the Linux LVM partitions into a logical pool of space. You can carve out logical volumes from the available space in the group. The Linux LVM partitions in a group can be on the same or different disks. You can add partitions or entire disks to expand the size of the group.

To use an entire disk, it must not contain any partitions. When using partitions, they must not be mounted. YaST will automatically change their partition type to 0x8E Linux LVM when adding them to a VG.

  1. Launch YaST and open the Partitioner.

  2. In case you need to reconfigure your existing partitioning setup, proceed as follows. Skip this step if you only want to use unused disks or partitions that already exist.

    Warning: Physical volumes on unpartitioned disks

    You can use an unpartitioned disk as a physical volume (PV) if that disk is not the one where the operating system is installed and from which it boots.

    As unpartitioned disks appear as unused at the system level, they can easily be overwritten or wrongly accessed.

    1. To use an entire hard disk that already contains partitions, delete all partitions on that disk.

    2. To use a partition that is currently mounted, unmount it.

  3. In the left panel, select Volume Management.

    A list of existing Volume Groups opens in the right panel.

  4. At the lower left of the Volume Management page, click Add Volume Group.

  5. Define the volume group as follows:

    1. Specify the Volume Group Name.

      If you are creating a volume group at install time, the name system is suggested for a volume group that will contain the SUSE Linux Enterprise Server system files.

    2. Specify the Physical Extent Size.

      The Physical Extent Size defines the size of a physical block in the volume group. All the disk space in a volume group is handled in chunks of this size. Values can be from 1 KB to 16 GB in powers of 2. This value is normally set to 4 MB.

      In LVM1, a 4 MB physical extent allowed a maximum LV size of 256 GB because it supports only up to 65534 extents per LV. LVM2, which is used on SUSE Linux Enterprise Server, does not restrict the number of physical extents. Having many extents has no impact on I/O performance to the logical volume, but it slows down the LVM tools.

      Important: Physical extent sizes

      Different physical extent sizes should not be mixed in a single VG. The physical extent size should not be modified after the initial setup.

    3. In the Available Physical Volumes list, select the Linux LVM partitions that you want to make part of this volume group, then click Add to move them to the Selected Physical Volumes list.

    4. Click Finish.

      The new group appears in the Volume Groups list.

  6. On the Volume Management page, click Next, verify that the new volume group is listed, then click Finish.

  7. To check which physical devices are part of the volume group, open the YaST Partitioner at any time in the running system and click Volume Management › Edit › Physical Devices. Leave this screen with Abort.

    Figure 5.2: Physical volumes in the volume group named DATA
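The 256 GB LVM1 limit quoted in the extent-size step above follows from simple arithmetic; a quick sketch (values hard-coded from the text, not read from any system):

```shell
# LVM1 supported at most 65534 extents per LV, so a 4 MiB extent
# caps an LV near 256 GB (integer arithmetic rounds down to 255 GiB).
extent_mib=4
max_extents=65534
max_gib=$(( extent_mib * max_extents / 1024 ))
echo "LVM1 maximum LV size with 4 MiB extents: ${max_gib} GiB"
```

LVM2 removes this restriction, so the calculation is only of historical interest.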

      

5.3 Creating logical volumes

  

A volume group provides a pool of space similar to what a hard disk does. To make this space usable, you need to define logical volumes. A logical volume is similar to a regular partition—you can format and mount it.

Use the YaST Partitioner to create logical volumes from an existing volume group. Assign at least one logical volume to each volume group. You can create new logical volumes as needed until all free space in the volume group has been exhausted. An LVM logical volume can optionally be thinly provisioned, allowing you to create logical volumes with sizes that overbook the available free space (see Section 5.3.1 for more information).

  • Normal volume:  (Default) The volume’s space is allocated immediately.

  • Thin pool:  The logical volume is a pool of space that is reserved for use with thin volumes. The thin volumes can allocate their needed space from it on demand.

  • Thin volume:  The volume is created as a sparse volume. The volume allocates needed space on demand from a thin pool.

  • Mirrored volume:  The volume is created with a defined count of mirrors.

Procedure 5.1: Setting up a logical volume

  

  1. Launch YaST and open the Partitioner.

  2. In the left panel, select Volume Management. A list of existing Volume Groups opens in the right panel.

  3. Select the volume group in which you want to create the volume and choose Logical Volumes › Add Logical Volume.

  4. Provide a Name for the volume and choose Normal Volume (refer to Section 5.3.1 for setting up thinly provisioned volumes). Proceed with Next.

  5. Specify the size of the volume and whether to use multiple stripes.

    Using a striped volume, the data will be distributed among several physical volumes. If these physical volumes reside on different hard disks, this generally results in better reading and writing performance (like RAID 0). The maximum number of available stripes is equal to the number of physical volumes. The default (1) is to not use multiple stripes.

  6. Choose a Role for the volume. Your choice here only affects the default values for the upcoming dialog. They can be changed in the next step. If in doubt, choose Raw Volume (Unformatted).

  7. Under Formatting Options, select Format Partition, then select the File system. The content of the Options menu depends on the file system. Usually there is no need to change the defaults.

    Under Mounting Options, select Mount partition, then select the mount point. Click Fstab Options to add special mounting options for the volume.

  8. Click Finish.

  9. Click Next, verify that the changes are listed, then click Finish.

5.3.1 Thinly provisioned logical volumes

  

An LVM logical volume can optionally be thinly provisioned. Thin provisioning allows you to create logical volumes with sizes that overbook the available free space. You create a thin pool that contains unused space reserved for use with an arbitrary number of thin volumes. A thin volume is created as a sparse volume and space is allocated from a thin pool as needed. The thin pool can be expanded dynamically when needed for cost-effective allocation of storage space. Thinly provisioned volumes also support snapshots which can be managed with Snapper—see Chapter 7, System recovery and snapshot management with Snapper for more information.
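The overbooking described above is plain arithmetic; a minimal sketch with hypothetical sizes (a 5 GiB pool backing three 4 GiB thin volumes):

```shell
# Thin provisioning lets the sum of virtual volume sizes exceed the
# physical space reserved by the pool. Sizes here are hypothetical.
pool_gib=5                     # physical space reserved by the thin pool
virtual_gib=$(( 4 + 4 + 4 ))   # three 4 GiB thin volumes
overbooked=$(( virtual_gib > pool_gib ))
echo "advertised ${virtual_gib} GiB backed by ${pool_gib} GiB of real space"
```

Real space is only consumed as data is written, which is why the pool must be monitored and expanded before it fills up.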

To set up a thinly provisioned logical volume, proceed as described in Procedure 5.1, Setting up a logical volume. When it comes to choosing the volume type, do not choose Normal Volume, but rather Thin Volume or Thin Pool.

Thin pool

The logical volume is a pool of space that is reserved for use with thin volumes. The thin volumes can allocate their needed space from it on demand.

Thin volume

The volume is created as a sparse volume. The volume allocates needed space on demand from a thin pool.

Important: Thinly provisioned volumes in a cluster

To use thinly provisioned volumes in a cluster, the thin pool and the thin volumes that use it must be managed in a single cluster resource. This allows the thin volumes and thin pool to always be mounted exclusively on the same node.

5.3.2 Creating mirrored volumes

  

A logical volume can be created with several mirrors. LVM ensures that data written to an underlying physical volume is mirrored onto a different physical volume. Thus even if a physical volume crashes, you can still access the data on the logical volume. LVM also keeps a log file to manage the synchronization process. The log contains information about which volume regions are currently undergoing synchronization with mirrors. By default, the log is stored on disk and, if possible, on a different disk from the mirrors. But you may specify a different location for the log, for example volatile memory.

Currently there are two types of mirror implementation available: "normal" (non-raid) mirror logical volumes and raid1 logical volumes.

After you create mirrored logical volumes, you can perform standard operations such as activating, extending, and removing them.

5.3.2.1 Setting up mirrored non-RAID logical volumes

  

To create a mirrored volume use the lvcreate command. The following example creates a 500 GB logical volume with two mirrors called lv1, which uses a volume group vg1.

> sudo lvcreate -L 500G -m 2 -n lv1 vg1

Such a logical volume is a linear volume (without striping) that provides three copies of the file system. The -m option specifies the count of mirrors. The -L option specifies the size of the logical volumes.

The logical volume is divided into regions of the 512 KB default size. If you need a different size of regions, use the -R option followed by the desired region size in megabytes. Or you can configure the preferred region size by editing the mirror_region_size option in the /etc/lvm/lvm.conf file.

5.3.2.2 Setting up raid1 logical volumes

  

As LVM supports RAID, you can implement mirroring by using RAID 1. Such an implementation provides the following advantages compared to the non-raid mirrors:

  • LVM maintains a fully redundant bitmap area for each mirror image, which increases its fault handling capabilities.

  • Mirror images can be temporarily split from the array and then merged back.

  • The array can handle transient failures.

  • The LVM RAID 1 implementation supports snapshots.

On the other hand, this type of mirroring implementation does not allow creating a logical volume in a clustered volume group.

To create a mirror volume by using RAID, issue the command

> sudo lvcreate --type raid1 -m 1 -L 1G -n lv1 vg1

where the options/parameters have the following meanings:

  • --type raid1 - you need to specify raid1, otherwise the command uses the implicit segment type mirror and creates a non-raid mirror.

  • -m - specifies the count of mirrors.

  • -L - specifies the size of the logical volume.

  • -n - by using this option you specify a name of the logical volume.

  • vg1 - is a name of the volume group used by the logical volume.

LVM creates a logical volume of one extent size for each data volume in the array. If you have two mirrored volumes, LVM creates another two volumes that store metadata.

After you create a RAID logical volume, you can use the volume in the same way as a common logical volume. You can activate it, extend it, etc.

5.4 Automatically activating non-root LVM volume groups

  

Activation behavior for non-root LVM volume groups is controlled in the /etc/lvm/lvm.conf file by the auto_activation_volume_list parameter. By default, the parameter is empty and all volumes are activated. To activate only some volume groups, add the names in quotes and separate them with commas, for example:

auto_activation_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]

If you have defined a list in the auto_activation_volume_list parameter, the following will happen:

  1. Each logical volume is first checked against this list.

  2. If it does not match, the logical volume will not be activated.

By default, non-root LVM volume groups are automatically activated on system restart by Dracut. This parameter allows you to activate all volume groups on system restart, or to activate only specified non-root LVM volume groups.
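The check described above can be sketched as a simplified matcher; the names and matching logic here are illustrative only and do not reproduce LVM's actual parser (tag and wildcard entries such as @tag1 are ignored):

```shell
# Simplified illustration of the auto_activation_volume_list decision:
# a VG (or VG/LV) is activated only if it exactly matches a list entry.
allow_list="vg1 vg2/lvol1"

is_activated() {
  for entry in $allow_list; do
    if [ "$entry" = "$1" ]; then
      echo yes
      return
    fi
  done
  echo no
}

is_activated vg1    # listed, so it would be activated
is_activated vg3    # not listed, so it would stay inactive
```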

5.5 Resizing an existing volume group

  

The space provided by a volume group can be expanded at any time in the running system without service interruption by adding physical volumes. This will allow you to add logical volumes to the group or to expand the size of existing volumes as described in Section 5.6.

It is also possible to reduce the size of the volume group by removing physical volumes. YaST only allows removing physical volumes that are currently unused. To find out which physical volumes are currently in use, run the following command. The partitions (physical volumes) listed in the PE Ranges column are the ones in use:

> sudo pvs -o vg_name,lv_name,pv_name,seg_pe_ranges
root's password:
  VG   LV    PV         PE Ranges
             /dev/sda1
  DATA DEVEL /dev/sda5  /dev/sda5:0-3839
  DATA       /dev/sda5
  DATA LOCAL /dev/sda6  /dev/sda6:0-2559
  DATA       /dev/sda7
  DATA       /dev/sdb1
  DATA       /dev/sdc1
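As a sketch, the in-use physical volumes can be filtered out of such a report with awk; the sample output is embedded below rather than taken from a live pvs call:

```shell
# Extract PVs that have PE ranges allocated (i.e. are in use) from a
# sample `pvs -o vg_name,lv_name,pv_name,seg_pe_ranges` report.
report='  VG   LV    PV         PE Ranges
             /dev/sda1
  DATA DEVEL /dev/sda5  /dev/sda5:0-3839
  DATA       /dev/sda5
  DATA LOCAL /dev/sda6  /dev/sda6:0-2559
  DATA       /dev/sda7
  DATA       /dev/sdb1
  DATA       /dev/sdc1'

# A PE range looks like /dev/sda5:0-3839; keep only the device part.
in_use=$(printf '%s\n' "$report" \
  | awk '$NF ~ /:[0-9]+-[0-9]+$/ { split($NF, a, ":"); print a[1] }' \
  | sort -u)
echo "$in_use"
```

In this sample, /dev/sda5 and /dev/sda6 are in use; the remaining partitions could be removed from the VG.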

  1. Launch YaST and open the Partitioner.

  2. In the left panel, select Volume Management. A list of existing Volume Groups opens in the right panel.

  3. Select the volume group you want to change, activate the Physical Volumes tab, then click Change.

  4. Do one of the following:

    • Add:  Expand the size of the volume group by moving one or more physical volumes (LVM partitions) from the Available Physical Volumes list to the Selected Physical Volumes list.

    • Remove:  Reduce the size of the volume group by moving one or more physical volumes (LVM partitions) from the Selected Physical Volumes list to the Available Physical Volumes list.

  5. Click Finish.

  6. Click Next, verify that the changes are listed, then click Finish.

5.6 Resizing a logical volume

  

In case there is unused free space available in the volume group, you can enlarge a logical volume to provide more usable space. You may also reduce the size of a volume to free space in the volume group that can be used by other logical volumes.

Note: Online resizing

When reducing the size of a volume, YaST automatically resizes its file system, too. Whether a currently mounted volume can be resized online (that is, while being mounted) depends on its file system. Growing the file system online is supported by Btrfs, XFS, Ext3, and Ext4.

Shrinking the file system online is only supported by Btrfs. To shrink the Ext2/3/4 file systems, you need to unmount them. Shrinking volumes formatted with XFS is not possible, since XFS does not support file system shrinking.
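The support matrix from this note can be encoded as a small lookup for scripting purposes; this is an informational sketch that does not query the kernel or the file system:

```shell
# Online-resize support per the note above, encoded as a lookup.
# "no" means the operation requires unmounting (or, for shrinking
# XFS, is not possible at all).
can_resize_online() {  # usage: can_resize_online grow|shrink FS
  case "$1:$2" in
    grow:btrfs|grow:xfs|grow:ext3|grow:ext4) echo yes ;;
    shrink:btrfs)                            echo yes ;;
    *)                                       echo no  ;;
  esac
}

can_resize_online grow xfs      # growing XFS online is supported
can_resize_online shrink ext4   # shrinking Ext4 requires unmounting
```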

  1. Launch YaST and open the Partitioner.

  2. In the left panel, select Volume Management. A list of existing Volume Groups opens in the right panel.

  3. Select the logical volume you want to change, then click Resize.

  4. Set the intended size by using one of the following options:

    • Maximum size.  Expand the size of the logical volume to use all space left in the volume group.

    • Minimum size.  Reduce the size of the logical volume to the size occupied by the data and the file system metadata.

    • Custom size.  Specify the new size for the volume. The value must be within the range of the minimum and maximum values listed above. Use K, M, G, T for Kilobytes, Megabytes, Gigabytes and Terabytes (for example 20G).

  5. Click OK.

  6. Click Next, verify that the change is listed, then click Finish.

5.7 Deleting a volume group or a logical volume

  

Warning: Data loss

Deleting a volume group destroys all of the data in each of its member partitions. Deleting a logical volume destroys all data stored on the volume.

  1. Launch YaST and open the Partitioner.

  2. In the left panel, select Volume Management. A list of existing volume groups opens in the right panel.

  3. Select the volume group or the logical volume you want to remove and click Delete.

  4. Depending on your choice, warning dialogs are shown. Confirm them with Yes.

  5. Click Next, verify that the deleted volume group is listed—deletion is indicated by a red-colored font—then click Finish.

5.8 Disabling LVM on boot

  

If there is an error on the LVM storage, the scanning of LVM volumes may prevent entering the emergency/rescue shell. Thus, further problem diagnosis is not possible. To disable this scanning in case of an LVM storage failure, you can pass the nolvm option on the kernel command line.

5.9 Using LVM commands

  

For information about using LVM commands, see the man pages for the commands described in the following table. All commands need to be executed with root privileges. Either use sudo COMMAND (recommended), or execute them directly as root.

LVM commands

  

pvcreate DEVICE

Initializes a device (such as /dev/sdb1) for use by LVM as a physical volume. If there is any file system on the specified device, a warning appears. Bear in mind that pvcreate checks for existing file systems only if blkid is installed (which is done by default). If blkid is not available, pvcreate will not produce any warning and you may lose your file system without any warning.

pvdisplay DEVICE

Displays information about the LVM physical volume, such as whether it is currently being used in a logical volume.

vgcreate -c y VG_NAME DEVICE1 [DEVICE2...]

Creates a clustered volume group with one or more specified devices.

vgchange --activationmode ACTIVATION_MODE VG_NAME

Configures the mode of volume group activation. You can specify one of the following values:

  • complete - only the logical volumes that are not affected by missing physical volumes can be activated, even though the particular logical volume can tolerate such a failure.

  • degraded - is the default activation mode. If there is a sufficient level of redundancy to activate a logical volume, the logical volume can be activated even though some physical volumes are missing.

  • partial - the LVM tries to activate the volume group even though some physical volumes are missing. If a non-redundant logical volume is missing important physical volumes, then the logical volume usually cannot be activated and is handled as an error target.

vgchange -a [ey|n] VG_NAME

Activates (-a ey) or deactivates (-a n) a volume group and its logical volumes for input/output.

When activating a volume in a cluster, ensure that you use the ey option. This option is used by default in the load script.

vgremove VG_NAME

Removes a volume group. Before using this command, remove the logical volumes, then deactivate the volume group.

vgdisplay VG_NAME

Displays information about a specified volume group.

To find the total physical extent of a volume group, enter

> vgdisplay VG_NAME | grep "Total PE"
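Building on that, the total size of the VG is Total PE multiplied by PE Size; a sketch with sample vgdisplay output embedded (a live system would pipe the real report instead):

```shell
# Derive the VG size from "Total PE" and "PE Size". The report below is
# sample data; the PE count and extent size are hypothetical.
report='  PE Size               4.00 MiB
  Total PE              6399'

total_pe=$(printf '%s\n' "$report" | awk '/Total PE/ { print $3 }')
pe_mib=$(printf '%s\n' "$report" | awk '/PE Size/ { print int($3) }')
echo "VG size: $(( total_pe * pe_mib )) MiB"
```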

lvcreate -L SIZE -n LV_NAME VG_NAME

Creates a logical volume of the specified size.

lvcreate -L SIZE --thinpool POOL_NAME VG_NAME

Creates a thin pool named myPool of the specified size from the volume group VG_NAME.

The following example creates a thin pool with a size of 5 GB from the volume group LOCAL:

> sudo lvcreate -L 5G --thinpool myPool LOCAL

lvcreate -T VG_NAME/POOL_NAME -V SIZE -n LV_NAME

Creates a thin logical volume within the pool POOL_NAME. The following example creates a 1GB thin volume named myThin1 from the pool myPool on the volume group LOCAL:

> sudo lvcreate -T LOCAL/myPool -V 1G -n myThin1

lvcreate -T VG_NAME/POOL_NAME -V SIZE -L SIZE -n LV_NAME

It is also possible to combine thin pool and thin logical volume creation in one command:

> sudo lvcreate -T LOCAL/myPool -V 1G -L 5G -n myThin1

lvchange --activationmode ACTIVATION_MODE LV_NAME

Configures the mode of logical volume activation. You can specify one of the following values:

  • complete - the logical volume can be activated only if all its physical volumes are active.

  • degraded - is the default activation mode. If there is a sufficient level of redundancy to activate a logical volume, the logical volume can be activated even though some physical volumes are missing.

  • partial - the LVM tries to activate the volume even though some physical volumes are missing. In this case part of the logical volume may be unavailable and it might cause data loss. This option is typically not used, but might be useful when restoring data.

You can specify the activation mode also in /etc/lvm/lvm.conf by specifying one of the above described values of the activation_mode configuration option.

lvcreate -s [-L SIZE] -n SNAP_VOLUME SOURCE_VOLUME_PATH

Creates a snapshot volume for the specified logical volume. If the size option (-L or --size) is not included, the snapshot is created as a thin snapshot.

lvremove /dev/VG_NAME/LV_NAME

Removes a logical volume.

Before using this command, close the logical volume by unmounting it with the umount command.

lvremove SNAP_VOLUME_PATH

Removes a snapshot volume.

lvconvert --merge SNAP_VOLUME_PATH

Reverts the logical volume to the version of the snapshot.

vgextend VG_NAME DEVICE

Adds the specified device (physical volume) to an existing volume group.

vgreduce VG_NAME DEVICE

Removes a specified physical volume from an existing volume group.

Ensure that the physical volume is not currently being used by a logical volume. If it is, you must move the data to another physical volume by using the pvmove command.

lvextend -L SIZE /dev/VG_NAME/LV_NAME

Extends the size of a specified logical volume. Afterward, you must also expand the file system to take advantage of the newly available space. See Chapter 2, Resizing file systems for details.

lvreduce -L SIZE /dev/VG_NAME/LV_NAME

Reduces the size of a specified logical volume.

Ensure that you reduce the size of the file system first before shrinking the volume, otherwise you risk losing data. See Chapter 2, Resizing file systems for details.

lvrename /dev/VG_NAME/LV_NAME /dev/VG_NAME/NEW_LV_NAME

Renames an existing LVM logical volume. It does not change the volume group name.

Tip: Bypassing udev on volume creation

In case you want to manage LV device nodes and symbolic links by using LVM instead of by using udev rules, you can achieve this by disabling notifications from udev with one of the following methods:

  • Configure udev_rules = 0 and udev_sync = 0 in /etc/lvm/lvm.conf.

    Note that specifying --noudevsync with the lvcreate command has the same effect as udev_sync = 0; setting udev_rules = 0 is still required.

  • Setting the environment variable DM_DISABLE_UDEV:

    export DM_DISABLE_UDEV=1

    This will also disable notifications from udev. In addition, all udev related settings from /etc/lvm/lvm.conf will be ignored.

5.9.1 Resizing a logical volume with commands

  

The lvresize, lvextend, and lvreduce commands are used to resize logical volumes. See the man pages for each of these commands for syntax and options information. To extend an LV there must be enough unallocated space available on the VG.

The recommended way to grow or shrink a logical volume is to use the YaST Partitioner. When using YaST, the size of the file system in the volume will automatically be adjusted, too.

LVs can be extended or shrunk manually while they are being used, but this may not be true for a file system on them. Extending or shrinking the LV does not automatically modify the size of file systems in the volume. You must use a different command to grow the file system afterward. For information about resizing file systems, see Chapter 2, Resizing file systems.

Ensure that you use the right sequence when manually resizing an LV:

  • If you extend an LV, you must extend the LV before you attempt to grow the file system.

  • If you shrink an LV, you must shrink the file system before you attempt to shrink the LV.
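
The two ordering rules can be sketched as a small dry-run helper that only prints the commands in the safe order; the function name and the Btrfs resize command are illustrative choices for this sketch, not LVM tooling:

```shell
#!/usr/bin/env bash
# Dry-run sketch: print (not execute) the commands for resizing an LV
# with a file system on it, in the safe order. Growing: extend the LV
# first, then the file system. Shrinking: shrink the file system first,
# then the LV. Assumes a Btrfs file system and -L relative sizes.
resize_plan() {
  local op=$1 size=$2 dev=$3
  if [ "$op" = grow ]; then
    echo "lvextend -L +$size $dev"
    echo "btrfs filesystem resize +$size $dev"
  else
    echo "btrfs filesystem resize -$size $dev"
    echo "lvreduce -L -$size $dev"
  fi
}

resize_plan grow 10G /dev/LOCAL/DATA
resize_plan shrink 5G /dev/LOCAL/DATA
```

Adapt the file system resize command to the file system you actually use; only the ordering is the point here.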

To extend the size of a logical volume:

  1. Open a terminal.

  2. If the logical volume contains an Ext2 file system, which does not support online growing, dismount it. In case it contains file systems that are hosted for a virtual machine (such as a Xen VM), shut down the VM first.

  3. At the terminal prompt, enter the following command to grow the size of the logical volume:

    > sudo lvextend -L +SIZE /dev/VG_NAME/LV_NAME

    For SIZE, specify the amount of space you want to add to the logical volume, such as 10 GB. Replace /dev/VG_NAME/LV_NAME with the Linux path to the logical volume, such as /dev/LOCAL/DATA. For example:

    > sudo lvextend -L +10GB /dev/LOCAL/DATA

  4. Adjust the size of the file system. See Chapter 2, Resizing file systems for details.

  5. In case you have dismounted the file system, mount it again.

For example, to extend an LV with a (mounted and active) Btrfs on it by 10 GB:

> sudo lvextend -L +10G /dev/LOCAL/DATA
> sudo btrfs filesystem resize +10G /dev/LOCAL/DATA

To shrink the size of a logical volume:

  1. Open a terminal.

  2. If the logical volume does not contain a Btrfs file system, dismount it. In case it contains file systems that are hosted for a virtual machine (such as a Xen VM), shut down the VM first. Note that volumes with the XFS file system cannot be reduced in size.

  3. Adjust the size of the file system. See Chapter 2, Resizing file systems for details.

  4. At the terminal prompt, enter the following command to shrink the size of the logical volume to the size of the file system:

    > sudo lvreduce -L SIZE /dev/VG_NAME/LV_NAME

    For SIZE, specify the new, smaller size of the logical volume; it must not be less than the size of the file system.

  5. In case you have unmounted the file system, mount it again.

For example, to shrink an LV with a Btrfs on it by 5 GB:

> sudo btrfs filesystem resize -5G /dev/LOCAL/DATA
> sudo lvreduce -L -5G /dev/LOCAL/DATA

Tip: Resizing the volume and the file system with a single command

Starting with SUSE Linux Enterprise Server 12 SP1, lvextend, lvresize, and lvreduce support the option --resizefs, which will not only change the size of the volume, but will also resize the file system. Therefore the examples for lvextend and lvreduce shown above can alternatively be run as follows:

> sudo lvextend --resizefs -L +10G /dev/LOCAL/DATA
> sudo lvreduce --resizefs -L -5G /dev/LOCAL/DATA

Note that --resizefs is supported for the following file systems: Ext2/3/4, Btrfs, and XFS. Resizing Btrfs with this option is currently only available on SUSE Linux Enterprise Server, since it is not yet accepted upstream.

5.9.2 Using LVM cache volumes

  

LVM supports the use of fast block devices (such as an SSD device) as write-back or write-through caches for large slower block devices. The cache logical volume type uses a small and fast LV to improve the performance of a large and slow LV.

To set up LVM caching, you need to create two logical volumes on the caching device. A large one is used for the caching itself, and a smaller volume is used to store the caching metadata. These two volumes need to be part of the same volume group as the original volume. When these volumes are created, they need to be converted into a cache pool, which needs to be attached to the original volume:

Procedure 5.2: Setting up a cached logical volume

  

  1. Create the original volume (on a slow device) if not already existing.

  2. Add the physical volume (from a fast device) to the same volume group the original volume is part of and create the cache data volume on the physical volume.

  3. Create the cache metadata volume. The size should be 1/1000 of the size of the cache data volume, with a minimum size of 8 MB.

  4. Combine the cache data volume and metadata volume into a cache pool volume:

    > sudo lvconvert --type cache-pool --poolmetadata VOLUME_GROUP/METADATA_VOLUME VOLUME_GROUP/CACHE_DATA_VOLUME

  5. Attach the cache pool to the original volume:

    > sudo lvconvert --type cache --cachepool VOLUME_GROUP/CACHE_DATA_VOLUME VOLUME_GROUP/ORIGINAL_VOLUME

For more information on LVM caching, see the lvmcache(7) man page.
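
The metadata sizing rule from step 3 of the procedure (1/1000 of the data volume, minimum 8 MB) can be computed with a small helper; the function name is ours for illustration, not part of LVM:

```shell
# Sketch: cache metadata LV size in MiB for a given cache data LV size
# in MiB: 1/1000 of the data size, but never less than 8 MiB.
cache_meta_mib() {
  local data_mib=$1
  local meta=$(( data_mib / 1000 ))
  if [ "$meta" -lt 8 ]; then
    meta=8
  fi
  echo "$meta"
}

cache_meta_mib 102400   # 100 GiB data volume -> 102
cache_meta_mib 2048     # 2 GiB data volume -> minimum applies -> 8
```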

5.10 Tagging LVM2 storage objects

  

A tag is an unordered keyword or term assigned to the metadata of a storage object. Tagging allows you to classify collections of LVM storage objects in ways that you find useful by attaching an unordered list of tags to their metadata.

5.10.1 Using LVM2 tags

  

After you tag the LVM2 storage objects, you can use the tags in commands to accomplish the following tasks:

  • Select LVM objects for processing according to the presence or absence of specific tags.

  • Use tags in the configuration file to control which volume groups and logical volumes are activated on a server.

  • Override settings in a global configuration file by specifying tags in the command.

A tag can be used in place of any command line LVM object reference that accepts:

  • a list of objects

  • a single object as long as the tag expands to a single object

Replacing the object name with a tag is not supported everywhere yet. After the arguments are expanded, duplicate arguments in a list are resolved by removing the duplicates and retaining the first instance of each argument.
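
The first-instance rule can be reproduced with a short sketch (dedupe_args is an illustrative helper, not an LVM command):

```shell
# Sketch: after tag expansion, duplicate arguments are removed and the
# first instance of each argument is retained; awk's seen[] idiom
# implements exactly that.
dedupe_args() {
  printf '%s\n' "$@" | awk '!seen[$0]++'
}

dedupe_args vg1 vg2 vg1 vg3 vg2   # prints vg1, vg2, vg3
```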

Wherever there might be ambiguity of argument type, you must prefix a tag with the commercial at sign (@) character, such as @mytag. Elsewhere, using the @ prefix is optional.

5.10.2 Requirements for creating LVM2 tags

  

Consider the following requirements when using tags with LVM:

Supported characters

An LVM tag word can contain the ASCII uppercase characters A to Z, lowercase characters a to z, numbers 0 to 9, underscore (_), plus (+), hyphen (-), and period (.). The word cannot begin with a hyphen. The maximum length is 128 characters.
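
These constraints map directly to a character class and two length checks; the validator below is an illustrative helper for this sketch, not an LVM tool:

```shell
# Sketch: validate an LVM tag word: ASCII letters, digits, _, +, -, .
# are allowed; it must not begin with a hyphen; maximum 128 characters.
valid_lvm_tag() {
  local tag=$1
  [ "${#tag}" -ge 1 ] && [ "${#tag}" -le 128 ] || return 1
  case $tag in
    -*) return 1 ;;
  esac
  printf '%s' "$tag" | grep -Eq '^[A-Za-z0-9_+.-]+$'
}

valid_lvm_tag database_01 && echo "valid"
valid_lvm_tag "-leading" || echo "rejected"
```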

Supported storage objects

You can tag LVM2 physical volumes, volume groups, logical volumes, and logical volume segments. Tags on a physical volume are stored in its volume group's metadata. Deleting a volume group also deletes the tags in the orphaned physical volume. Snapshots cannot be tagged, but their origin can be tagged.

LVM1 objects cannot be tagged because the disk format does not support it.

5.10.3 Command line tag syntax

  

--addtag TAG_INFO

Add a tag to (or tag) an LVM2 storage object. Example:

> sudo vgchange --addtag @db1 vg1

--deltag TAG_INFO

Remove a tag from (or untag) an LVM2 storage object. Example:

> sudo vgchange --deltag @db1 vg1

--tag TAG_INFO

Specify the tag to use to narrow the list of volume groups or logical volumes to be activated or deactivated.

Enter the following to activate the volume if it has a tag that matches the tag provided (example):

> sudo lvchange -ay --tag @db1 vg1/vol2

5.10.4 Configuration file syntax

  

The following sections show example configurations for certain use cases.

5.10.4.1 Enabling host name tags in the /etc/lvm/lvm.conf file

  

Add the following code to the /etc/lvm/lvm.conf file to enable host tags that are defined separately on each host in a /etc/lvm/lvm_<HOSTNAME>.conf file.

tags {
   # Enable hostname tags
   hosttags = 1
}

You place the activation code in the /etc/lvm/lvm_<HOSTNAME>.conf file on the host.

5.10.4.2 Defining tags for host names in the lvm.conf file

  

tags {
   tag1 { }
      # Tag does not require a match to be set.

   tag2 {
      # If no exact match, tag is not set.
      host_list = [ "hostname1", "hostname2" ]
   }
}

5.10.4.3 Defining activation

  

You can modify the /etc/lvm/lvm.conf file to activate LVM logical volumes based on tags.

In a text editor, add the following code to the file:

activation {
    volume_list = [ "vg1/lvol0", "@database" ]
}

Replace @database with your tag. Use "@*" to match the tag against any tag set on the host.

The activation command matches against VGNAME, VGNAME/LVNAME, or @TAG set in the metadata of volume groups and logical volumes. A volume group or logical volume is activated only if a metadata tag matches. If there is no match, the default is not to activate.

If volume_list is not present and tags are defined on the host, then it activates the volume group or logical volumes only if a host tag matches a metadata tag.

If volume_list is defined, but empty, and no tags are defined on the host, then it does not activate.

If volume_list is undefined, it imposes no limits on LV activation (all are allowed).
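
Ignoring host tags, the matching rules above can be condensed into a small decision sketch (should_activate is illustrative; the real matching lives inside LVM):

```shell
# Sketch: decide whether a volume is activated. Arguments: VG name,
# LV name, a space-separated list of metadata tags, then the
# volume_list entries. Passing no entries at all stands in for an
# undefined volume_list, which imposes no limits on activation.
should_activate() {
  local vg=$1 lv=$2 tags=$3
  shift 3
  [ "$#" -eq 0 ] && return 0
  local entry t
  for entry in "$@"; do
    [ "$entry" = "$vg" ] && return 0
    [ "$entry" = "$vg/$lv" ] && return 0
    for t in $tags; do
      [ "$entry" = "@$t" ] && return 0
    done
  done
  return 1
}

should_activate vg1 lvol0 "database" "vg2" "@database" && echo "activated"
```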

5.10.4.4 Defining activation in multiple host name configuration files

  

You can use the activation code in a host's configuration file (/etc/lvm/lvm_<HOST_TAG>.conf) when host tags are enabled in the /etc/lvm/lvm.conf file. For example, a server has two configuration files in the /etc/lvm/ directory:

lvm.conf
lvm_<HOST_TAG>.conf

At start-up, LVM loads the /etc/lvm/lvm.conf file and processes any tag settings in the file. If any host tags were defined, it loads the related /etc/lvm/lvm_<HOST_TAG>.conf file. When it searches for a specific configuration file entry, it searches the host tag file first, then the lvm.conf file, and stops at the first match. Within the lvm_<HOST_TAG>.conf file, use the reverse order that tags were set in. This allows the file for the last tag set to be searched first. New tags set in the host tag file will trigger additional configuration file loads.
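
The load-and-search behavior can be sketched as a helper that, given the host tags in the order they were set, prints the files in the order they are searched (paths are illustrative; bash indirect expansion walks the arguments backward):

```shell
#!/usr/bin/env bash
# Sketch: print configuration files in search order. Host tag files are
# searched in reverse order of tag setting; lvm.conf is searched last,
# and the search stops at the first matching entry.
conf_search_order() {
  local i
  for (( i=$#; i>=1; i-- )); do
    echo "/etc/lvm/lvm_${!i}.conf"
  done
  echo "/etc/lvm/lvm.conf"
}

conf_search_order db1 fs1
```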

5.10.5 Using tags for a simple activation control in a cluster

  

You can set up a simple host name activation control by enabling the hosttags option in the /etc/lvm/lvm.conf file. Use the same file on every machine in a cluster so that it is a global setting.

  1. In a text editor, add the following code to the /etc/lvm/lvm.conf file:

     tags {
        hosttags = 1
     }

  2. Replicate the file to all hosts in the cluster.

  3. From any machine in the cluster, add db1 to the list of machines that activate vg1/lvol2:

     > sudo lvchange --addtag @db1 vg1/lvol2

  4. On the db1 server, enter the following to activate it:

     > sudo lvchange -ay vg1/lvol2

5.10.6 Using tags to activate on preferred hosts in a cluster

  

The examples in this section demonstrate two methods to accomplish the following:

  • Activate volume group vg1 only on the database hosts db1 and db2.

  • Activate volume group vg2 only on the file server host fs1.

  • Activate nothing initially on the file server backup host fsb1, but be prepared for it to take over from the file server host fs1.

5.10.6.1 Option 1: centralized admin and static configuration replicated between hosts

  

In the following solution, the single configuration file is replicated among multiple hosts.

  1. Add the @database tag to the metadata of volume group vg1. In a terminal, enter

     > sudo vgchange --addtag @database vg1

  2. Add the @fileserver tag to the metadata of volume group vg2. In a terminal, enter

     > sudo vgchange --addtag @fileserver vg2

  3. In a text editor, modify the /etc/lvm/lvm.conf file with the following code to define the @database, @fileserver, and @fileserverbackup tags.

     tags {
        database {
           host_list = [ "db1", "db2" ]
        }
        fileserver {
           host_list = [ "fs1" ]
        }
        fileserverbackup {
           host_list = [ "fsb1" ]
        }
     }

     activation {
        volume_list = [ "@database", "@fileserver", "@fileserverbackup" ]
     }

  4. Replicate the modified /etc/lvm/lvm.conf file to the four hosts: db1, db2, fs1, and fsb1.

  5. If the file server host goes down, vg2 can be brought up on fsb1 by entering the following commands in a terminal on any node:

     > sudo vgchange --addtag @fileserverbackup vg2
     > sudo vgchange -ay vg2

5.10.6.2 Option 2: localized admin and configuration

  

In the following solution, each host holds locally the information about which classes of volume to activate.

  1. Add the @database tag to the metadata of volume group vg1. In a terminal, enter

     > sudo vgchange --addtag @database vg1

  2. Add the @fileserver tag to the metadata of volume group vg2. In a terminal, enter

     > sudo vgchange --addtag @fileserver vg2

  3. Enable host tags in the /etc/lvm/lvm.conf file:

    1. In a text editor, modify the /etc/lvm/lvm.conf file with the following code to enable host tag configuration files.

       tags {
          hosttags = 1
       }

    2. Replicate the modified /etc/lvm/lvm.conf file to the four hosts: db1, db2, fs1, and fsb1.

  4. On host db1, create an activation configuration file for the database host db1. In a text editor, create the /etc/lvm/lvm_db1.conf file and add the following code:

     activation {
         volume_list = [ "@database" ]
     }

  5. On host db2, create an activation configuration file for the database host db2. In a text editor, create the /etc/lvm/lvm_db2.conf file and add the following code:

     activation {
         volume_list = [ "@database" ]
     }

  6. On host fs1, create an activation configuration file for the file server host fs1. In a text editor, create the /etc/lvm/lvm_fs1.conf file and add the following code:

     activation {
         volume_list = [ "@fileserver" ]
     }

  7. If the file server host fs1 goes down, to bring up a spare file server host fsb1 as a file server:

    1. On host fsb1, create an activation configuration file for the host fsb1. In a text editor, create the /etc/lvm/lvm_fsb1.conf file and add the following code:

       activation {
           volume_list = [ "@fileserver" ]
       }

    2. In a terminal, enter the following to activate the volume group:

       > sudo vgchange -ay vg2