
How to Test OCFS2 in a Virtual Environment 

with Oracle VM VirtualBox

 

by Robert Chase

Setting up an OCFS2 file system in Oracle VM VirtualBox enables you to test failover capabilities and whether an application is suitable for running in a virtualized environment. This article describes how to set up such an environment.

 

Published April 2014


Introduction

For development and testing purposes, an Oracle Cluster File System Version 2 (OCFS2) file system can be set up in a virtual environment using Oracle VM VirtualBox on desktop-class hardware. You can build the nodes the same way you would on a production system, after some minor command-line interface (CLI) modification of the virtual disk image (VDI) files. For development and testing, a three-node OCFS2 cluster is the most useful configuration because it shows the interaction between the nodes.

This article is designed to assist with the setup of OCFS2 within Oracle VM VirtualBox and is not a complete reference on how to set up the OCFS2 file system. Please see the "See Also" section at the end of this article for additional information on the specifics of setting up OCFS2.

About Oracle Cluster File System Version 2

OCFS2 is a high-performance, high-availability, POSIX-compliant, general-purpose file system for Linux. It is a versatile clustered file system that can be used with applications that are cluster-aware as well as with those that are not. OCFS2 has been fully integrated into the mainline Linux kernel since 2006 and is available on most Linux distributions. Please see the "See Also" section of this article for more information on Oracle's OCFS2 project.

Advantages and Disadvantages of Testing in a Virtual Environment

When testing OCFS2 within Oracle VM VirtualBox, it's important to understand some of the strengths and weaknesses of testing in this manner. One of the major strengths is the speed and ease of setting up the testing environment. Failover testing and proof-of-concept work to assess application suitability also work well in a virtualized environment.

However, true redundancy is not available because all of the nodes run on a single machine, and a hardware failure on the host system will bring down the entire testing environment. Performance testing is also not practical, because the virtual environment lacks dedicated physical hardware and networking.

Configuring Oracle VM VirtualBox

To set up OCFS2 within Oracle VM VirtualBox, you will need to create three Oracle Linux virtual machines (VMs). You will also need an ISO image of the version of Oracle Linux you wish to install, which can be obtained from Oracle E-Delivery. Once you have downloaded the ISO image, you are ready to start creating VMs.

To use shared storage for OCFS2, you will need to create a single VM with an additional VDI that will later be modified via the command line so that it can be shared. Once this first VM is created and the VDI is modified, you can build the other two VMs and add the shared storage to them. The instructions below will guide you through the process.
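
The setup described in the steps below can also be scripted with the VBoxManage utility. The following is only a rough sketch under assumed values: the VM name OCFS2-1 and the shared disk file name LUN3.vdi are taken from the example output later in this article, while the sizes, OS type, and ISO file name are arbitrary examples. The GUI procedure in the numbered steps remains the reference.

# Create and register the first VM (names, sizes, and OS type are example values)
VBoxManage createvm --name OCFS2-1 --ostype Oracle_64 --register
VBoxManage modifyvm OCFS2-1 --memory 2048 --nic1 nat --nic2 intnet

# System disk (dynamically allocated) and a SATA controller to attach it to
VBoxManage createhd --filename OCFS2-1.vdi --size 10240 --variant Standard
VBoxManage storagectl OCFS2-1 --name "SATA" --add sata
VBoxManage storageattach OCFS2-1 --storagectl "SATA" --port 0 --device 0 --type hdd --medium OCFS2-1.vdi

# Shared disk: must be created as fixed size and is later marked shareable (Steps 4 through 6)
VBoxManage createhd --filename LUN3.vdi --size 12288 --variant Fixed
VBoxManage storageattach OCFS2-1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium LUN3.vdi
VBoxManage modifyhd LUN3.vdi --type shareable

# Attach the Oracle Linux installation ISO image as the boot device (Step 3)
VBoxManage storageattach OCFS2-1 --storagectl "SATA" --port 2 --device 0 --type dvddrive --medium OracleLinux.iso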

  1. In Oracle VM VirtualBox, select New to begin the process of creating a VM. A good rule of thumb is to create VMs with 2 GB of RAM and at least 5 GB to 10 GB of disk space. You can use a dynamically allocated VDI file to save disk space on the file system for the host operating system.


    Figure 1. Creating a VM in Oracle VM VirtualBox.

  2. After creating the VM, click Settings and then click Network. Then enable adapter 2 and select Internal Network from the Attached to list, so that adapter 2 will be the OCFS2 dedicated network interface.


    Figure 2. Enabling a dedicated network interface for OCFS2.

  3. Click Storage, and select the Oracle Linux ISO image you downloaded earlier to be the boot device by clicking the CD icon (circled in red in Figure 3).


    Figure 3. Selecting the ISO image for booting.

  4. To use shared storage with Oracle VM VirtualBox, you need to create a separate VDI file. Click Storage, select Controller: SATA, and then select Add Hard Disk to add a new VDI file to be used as the shared storage. Create this VDI file as a fixed size; do not use the dynamic allocation option. The size specified for this VDI file should be the size of the storage you want to use for testing.


    Figure 4. Creating a separate VDI file.

  5. Before the VDI file can be shared with all of the VMs in the cluster, you need to make some changes to allow sharing. These changes are made via the command line with the VBoxManage utility. Once you have created the VDI file, you need to identify its universally unique identifier (UUID) for use with VBoxManage.
     

    On a Windows machine, select Start > Run, type cmd, and press Enter. This will bring you to a command prompt so you can access the VBoxManage utility. Then type the following commands to identify the UUID of the VDI file.

    Note: The VBoxManage list hdds command also works on Linux and Mac systems, where the utility behaves the same way; however, on those systems it is not necessary to change into the installation directory first, as is done in the following Windows example.

    cd C:\Program Files\Oracle\VirtualBox
    VBoxManage list hdds
    
    UUID:        702d44e7-c234-421f-880b-335da09d8414
    Parent UUID: base
    Format:      VDI
    Location:    C:\Users\testenv\VirtualBox VMs\OCFS2-1\LUN3.vdi
    State:       created
    Type:        shareable
    
  6. Once you have identified the UUID, modify the VDI file to be shareable between the nodes by running a command similar to the following, except use the UUID you identified:
     
    VBoxManage modifyhd 702d44e7-c234-421f-880b-335da09d8414 --type shareable
    
  7. Once the VDI file has been set up for sharing using the VBoxManage utility, create two additional VMs by repeating the steps above; you do not need to create the shared VDI file again or modify it via the command line. When creating these two VMs, add the shared storage by selecting the Add Hard Disk option as before, then select Choose Existing Disk and select the shared VDI file you created earlier.


    Figure 5. Creating the other VMs.

    At this point you should have three VMs within Oracle VM VirtualBox, all with a private network adapter configured and all with access to the shared storage.

  8. Boot each VM and install Oracle Linux on each of the nodes.
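
If you would rather start the VMs from the command line than from the Oracle VM VirtualBox window, a command like the following can be used for each VM (the VM name OCFS2-1 is the example name used above):

VBoxManage startvm OCFS2-1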

Configuring OCFS2

Once the operating systems are installed on the nodes, there are a few configuration changes that need to be performed on each node to allow them to use the private network.

  1. Configure the IP addressing scheme for the private network. For a demo environment, you can use the following IP addressing scheme. In this example, interface eth1 is being used as the private network interface.
     
    Node 1: 10.0.0.1
    Node 2: 10.0.0.2
    Node 3: 10.0.0.3
    
  2. From the command line in your virtual machine instance, add a file called /etc/sysconfig/network-scripts/ifcfg-eth1, so that the first node can communicate on the private network. Here is example content to put in the file:
     
    DEVICE="eth1"
    BOOTPROTO="static"
    IPADDR="10.0.0.1"
    NM_CONTROLLED="no"
    ONBOOT="yes"
    TYPE="Ethernet"
    
  3. Repeat Step 2 for each of the other nodes, changing the IP address for each node according to the addressing scheme from Step 1.
  4. Install the ocfs2-tools.x86_64 package on each node by typing the following command:
     
    yum install ocfs2-tools.x86_64
    
     

    In addition, you might want to update the OS at this point. You can do this by typing the following command:

    yum update
    
  5. For the purposes of this article, a cluster name of ocfs2demo will be used and the cluster will have three nodes configured. We will use the o2cb cluster registration utility to add the cluster and the nodes as well as to register the cluster and start the heartbeat.
     

    On each of the nodes, run the following o2cb commands:

    o2cb add-cluster ocfs2demo
    o2cb add-node --ip 10.0.0.1 --port 7777 --number 1 ocfs2demo ocfs2-1
    o2cb add-node --ip 10.0.0.2 --port 7777 --number 2 ocfs2demo ocfs2-2
    o2cb add-node --ip 10.0.0.3 --port 7777 --number 3 ocfs2demo ocfs2-3
    o2cb register-cluster ocfs2demo
    o2cb start-heartbeat ocfs2demo
    
  6. On each node, configure and start the o2cb driver by typing the following command. The configuration is interactive and offers several default settings; you can accept the defaults and, when asked to provide the "Cluster to start on boot" information, enter ocfs2demo:
     
    [root@ocfs2-1 ~]# service o2cb configure 
    Configuring the O2CB driver.
    
    This will configure the on-boot properties of the O2CB driver.
    The following questions will determine whether the driver is loaded on boot.
    The current values will be shown in brackets ('[]').
    Hitting <ENTER> without typing an answer will keep that current value.
    Ctrl-C will abort.
    
    Load O2CB driver on boot (y/n) [y]:
    Cluster stack backing O2CB [o2cb]:
    Cluster to start on boot (Enter "none" to clear) [ocfs2demo]:
    Specify heartbeat dead threshold (>=7) [31]:
    Specify network idle timeout in ms (>=5000) [30000]:
    Specify network keepalive delay in ms (>=1000) [2000]:
    Specify network reconnect delay in ms (>=2000) [2000]:
    Writing O2CB configuration: OK
    Loading filesystem "configfs": OK
    Mounting configfs filesystem at /sys/kernel/config: OK
    Loading stack plugin "o2cb": OK
    Loading filesystem "ocfs2_dlmfs": OK
    Mounting ocfs2_dlmfs filesystem at /dlm: OK
    Setting cluster stack "o2cb": OK
    Registering O2CB cluster "ocfs2demo": OK
    Setting O2CB cluster timeouts : OK
    
     

    After registering each node, the /etc/ocfs2/cluster.conf file will look similar to the following when the configuration is complete.

    node:
         name = ocfs2-1
         cluster = ocfs2demo
         number = 1
         ip_address = 10.0.0.1
         ip_port = 7777
    
    node:
         name = ocfs2-2
         cluster = ocfs2demo
         number = 2
         ip_address = 10.0.0.2
         ip_port = 7777
    
    node:
         name = ocfs2-3
         cluster = ocfs2demo
         number = 3
         ip_address = 10.0.0.3
         ip_port = 7777
    
    cluster:
         name = ocfs2demo
         heartbeat_mode = local
         node_count = 3
    
  7. To format and prepare the shared storage, which needs to be done only once and from any one node in the cluster, run the following command to create an OCFS2 file system. In this example, the shared storage is the partition /dev/sdb1 on the shared VDI; create that partition first if it does not already exist.
     

    Note: Even though we are defining a three-node cluster in this example, the following command specifies that four node slots be created, which reserves one slot for future expansion. Node slots can be increased at any time (a sketch of this is shown after these steps), but they can't be removed once they are created, and they consume disk space. For more information about choosing the appropriate number of node slots, see the "OCFS2 Best Practices Guide" (a link is provided in the "See Also" section).

    mkfs.ocfs2 -L ocfs2demo --cluster-name=ocfs2demo --fs-feature-level=max-features -N 4 /dev/sdb1
  8. Create the /ocfs2demo mount point directory on each node (for example, with mkdir /ocfs2demo), and then add the following line to the /etc/fstab file on each node in the cluster:
     
    /dev/sdb1               /ocfs2demo              ocfs2   _netdev        0 0
  9. On each node in the cluster, mount the file systems listed in the /etc/fstab file by running the following command:
     
    mount -a
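
As mentioned in the note in Step 7, node slots can be added later if the cluster grows. The following is a minimal sketch, assuming the shared device is /dev/sdb1 as in the example above; check the tunefs.ocfs2 man page for your release to see whether the file system must be unmounted on all nodes first.

tunefs.ocfs2 -N 6 /dev/sdb1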
    

At this point, if the OCFS2 configuration is correct, the file system will mount, and you will see output similar to the following if you run df -h on each node in the cluster.

[user@ocfs2-1 ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_ocfs21-lv_root
                       26G  2.5G   22G  11% /
tmpfs                1004M     0 1004M   0% /dev/shm
/dev/sda1             485M   98M  362M  22% /boot
/dev/sdb1              12G  1.3G   11G  11% /ocfs2demo

 

You can also create a test file on the shared storage, and you will be able to see this test file on each node in the cluster.
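
For example, a minimal check, assuming the file system is mounted at /ocfs2demo on every node (the file name testfile is arbitrary):

[root@ocfs2-1 ~]# touch /ocfs2demo/testfile
[root@ocfs2-2 ~]# ls /ocfs2demo
testfile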

I hope you found this article useful.

See Also

About the Author

Robert Chase is a member of the Oracle Linux product management team. He has been involved with Linux and open source software since 1996. He has worked with systems as small as embedded devices and with large supercomputer-class hardware.

 

Source: http://www.oracle.com/technetwork/articles/servers-storage-admin/linux-ocfs2-vm-2191874.html
