
A.3.2 Example of the Physical Partition Configuration Procedure


  1. Log in to the master XSCF.
    Execute the showbbstatus command to check that the XSCF to which you have logged in is the master XSCF.
    If you have logged in to a standby XSCF, log out and then log in to the master XSCF again.
XSCF> showbbstatus
BB#00 (Master)
  2. Set mirror mode for memory.
  a. Execute the showfru command to check the memory mirror mode of the SPARC M10-4S system board (PSB<BB>).

    The following example displays the setting information for devices under the physical system board (PSB 00-0).
XSCF> showfru sb 00-0
Device  Location  Memory Mirror Mode
sb      00-0
  cpu   00-0-0    no
  cpu   00-0-1    no
  cpu   00-0-2    no
  cpu   00-0-3    no
  b. To use memory mirror mode, execute the setupfru command to set it.

    If you are not using memory mirror mode, this step is not necessary.

    For details on memory mirror mode, see "14.1 Configuring Memory Mirroring" in the Fujitsu SPARC M12 and Fujitsu M10/SPARC M10 System Operation and Administration Guide.

    The following example places all the CPUs under PSB 00-0 in memory mirror mode.
XSCF> setupfru -m y sb 00-0
  c. Execute the showfru command to check the memory mirror mode setting.
XSCF> showfru sb 00-0
Device  Location  Memory Mirror Mode
sb      00-0
  cpu   00-0-0    yes
  cpu   00-0-1    yes
  cpu   00-0-2    yes
  cpu   00-0-3    yes
  3. Create the physical partition configuration information.
  a. Execute the showpcl command to check the physical partition configuration information.
XSCF> showpcl -p 0
PPAR-ID LSB PSB Status
  b. Execute the setpcl command to register the system board in the physical partition configuration information.

    Execute the setpcl command to register each SPARC M10-4S system board in the configuration information of the physical partition into which it is to be incorporated.

    In the following example, physical system boards (PSBs) 00-0 and 01-0 are mapped to logical system boards (LSBs) 00 and 01 of physical partition 0.

XSCF> setpcl -p 0 -a 00=00-0 01=01-0
  c. Execute the showpcl command to check the physical partition configuration information.
XSCF> showpcl -p 0
PPAR-ID LSB PSB   Status
00                Running
        00  00-0
        01  01-0
    Specify the -v option to display details on the configuration policy, the IO nullification option (No-IO), and the memory nullification option (No-Mem).
XSCF> showpcl -v -p 0
PPAR-ID LSB PSB   Status  No-Mem No-IO Cfg-policy
00                Running              System
        00  00-0          False  False
        01  01-0          False  False
    Use the setpcl command to change the settings of the configuration policy, the IO nullification option (No-IO), and the memory nullification option (No-Mem).

    For details on the setpcl command, see the Fujitsu SPARC M12 and Fujitsu M10/SPARC M10 XSCF Reference Manual.
  4. Assign a system board (PSB<BB>) to a physical partition.
  a. Execute the showboards -a command to check the PSB status.

    Execute the showboards -a command to check that each PSB status is "SP" (system board pool).
XSCF> showboards -a
PSB PPAR-ID(LSB) Assignment Pwr Conn Conf Test Fault
---- ------------ ----------- ---- ---- ---- ------- --------

00-0 SP           Available   n    n    n    Passed  Normal
01-0 SP           Available   n    n    n    Passed  Normal
  b. Execute the addboard -c assign command to assign the PSBs.
XSCF> addboard -c assign -p 0 00-0 01-0
  c. Execute the showboards -p command to check the PSB status.

    Execute the showboards -p command to check the status of each PSB assigned to the physical partition.

    Check that the [Assignment] field of each PSB shows "Assigned", which confirms that each PSB has been normally assigned to physical partition 0.
XSCF> showboards -p 0
PSB PPAR-ID(LSB) Assignment Pwr Conn Conf Test Fault
---- ------------ ----------- ---- ---- ---- ------- --------

00-0 00(00)       Assigned    n    n    n    Passed  Normal
01-0 00(01)       Assigned    n    n    n    Passed  Normal
  5. Register the CPU Activation key to assign CPU core resources.
  a. Execute the showcodactivation command to check the information on the CPU Activation key.

    Execute the showcodactivation command to check whether the physical partition contains an assignable CPU Activation key.

    If only the header is displayed, the CPU Activation key is not registered in the XSCF.
XSCF> showcodactivation
Index Description Count
------- ----------- ------
Note - If the number of registered CPU Activations is not enough for the number of CPUs to be used, purchase CPU Activations and add the CPU Activation keys.
  b. Execute the addcodactivation command to register the CPU Activation key.

    For details on how to register a CPU Activation key, see "5.3 Adding CPU Core Resources" in the Fujitsu SPARC M12 and Fujitsu M10/SPARC M10 System Operation and Administration Guide.
XSCF> addcodactivation "Product: SPARC M10-4S
SequenceNumber:10005
Cpu: noExpiration 2
Text-Signature-SHA256-RSA2048:
PSSrElBrse/r69AVSVFd38sT6AZm2bxeUDdPQHKbtxgvZPsrtYguqiNUieB+mTDC
:
:
b1GCkFx1RH27FdVHiB2H0A=="
Above Key will be added, Continue? [y|n]: y
  c. Execute the showcodactivation command to check the information on the CPU Activation keys.

    Execute the showcodactivation command to check whether the physical partition contains an assignable CPU Activation key.
XSCF> showcodactivation
Index   Description Count
------- ----------- ------
0       PROC        2
1       PROC        2
2       PROC        2
3       PROC        2

--- Omitted ---

30      PROC        2
31      PROC        2
  d. Execute the setcod command to assign CPU core resources.

    Execute the setcod command to assign the CPU core resources to the physical partition.

    The following example assigns 64 CPU core resources to physical partition 0.
XSCF> setcod -p 0 -s cpu -c set 64
PROC Permits assigned for PPAR 0 : 0 -> 64

PROC Permits assigned for PPAR will be changed.
Continue? [y|n] :y

Completed.
Note - XSCF firmware of version XCP 2250 or earlier does not support the -c add, -c delete, and -c set options. To add and delete CPU core resources interactively, specify the setcod command options as shown below.
XSCF> setcod -s cpu
    Execute the showcod command to check information on the CPU core resources assigned to the physical partition.

    The following example confirms that 64 CPU core resources have been assigned to physical partition 0 with the setcod command that was just executed.
XSCF> showcod -p 0
PROC Permits assigned for PPAR 0: 64
  6. Execute the resetdateoffset command to reset the difference with the time managed by the XSCF.
    Execute the resetdateoffset command to reset the difference between the time managed by the XSCF and the time managed by the physical partitions.
XSCF> resetdateoffset -p 0
  7. Execute the showpparmode command to check the setting of the detail level of the diagnosis messages and that of the PPAR DR mode.
    Execute the showpparmode command to check that the detail level (Message Level) of the diagnosis messages is "normal" (standard) and that the Next field of PPAR DR mode is set to "on" (enabled).
XSCF> showpparmode -p 0
Host-ID                  :9007002b
Diagnostic Level         :min
Message Level            :normal
Alive Check              :on
Watchdog Reaction        :reset
Break Signal             :on
Autoboot(Guest Domain)   :on
Elastic Mode             :off
IOreconfigure            :false
PPAR DR(Current)         :-
PPAR DR(Next)            :on
    If the detail level of the diagnosis messages is other than "normal", execute the setpparmode command to set it to "normal".
    For details on the setpparmode command, see the Fujitsu SPARC M12 and Fujitsu M10/SPARC M10 XSCF Reference Manual.
XSCF> setpparmode -p 0 -m message=normal
    If PPAR DR mode is set to "off" (disabled), execute the setpparmode command to set it to "on".
XSCF> setpparmode -p 0 -m ppar_dr=on
  8. Execute the poweron command to power on the physical partition.
XSCF> poweron -p 0
  9. Execute the console command to connect the console to the physical partition.
XSCF> console -p 0
  10. Install Oracle Solaris and Oracle VM Server for SPARC.
    Install Oracle Solaris and Oracle VM Server for SPARC on the control domain.
    For details on the versions and conditions of Oracle Solaris required for physical partition dynamic reconfiguration, see Table 1-3 and Table 1-4.
    For details on the installation, see the following documents, presented on the Oracle Corporation web site (https://docs.oracle.com/en/).
    - Oracle Solaris 10

    Oracle Solaris 10 1/13 Installation Guide

    - Oracle Solaris 11

    Installing Oracle Solaris 11.x Systems

    - Oracle VM Server for SPARC

    "Installing and Enabling Software" in the Oracle VM Server for SPARC Administration Guide
  11. Configure a logical domain.
    This item describes how to configure the logical domains defined in "Figure A-3 Example of the 2BB Configuration for Operational Continuity (With Free Resources)."
  a. Reduce the number of control domain resources.

    For the factory-default configuration, all the CPU cores, memory, and the PCIe root complexes are assigned to the control domain (primary). To allow these resources to be assigned to other logical domains, first delete some of the resources from the control domain and configure the console service.
    The following shows command execution examples.

    Execute the ldm start-reconf command to switch to delayed reconfiguration mode.
# ldm start-reconf primary
Initiating a delayed reconfiguration operation on the primary domain.
All configuration changes for other domains are disabled until the primary domain reboots, at which time the new configuration for the primary domain will also take effect.
    Remove the root complexes with the ldm remove-io command.

    The following example shows part of the command sequence for removing PCIE1, PCIE2, PCIE3, PCIE5, PCIE6, PCIE7, PCIE9, PCIE10, PCIE11, PCIE13, PCIE14, and PCIE15, according to the configuration example.
# ldm remove-io PCIE1 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------
....
# ldm remove-io PCIE15 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------
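The repeated ldm remove-io invocations can also be scripted. The following is a dry-run sketch, not part of the original procedure: it only prints each command for review, and the slot list follows this configuration example (adapt it to your system before removing the echo).

```shell
#!/bin/sh
# Dry-run sketch: print one "ldm remove-io" command per PCIe slot to be
# removed from the control domain. Remove "echo" to actually execute them.
# The slot list follows the configuration example in this section.
for slot in PCIE1 PCIE2 PCIE3 PCIE5 PCIE6 PCIE7 \
            PCIE9 PCIE10 PCIE11 PCIE13 PCIE14 PCIE15; do
    echo ldm remove-io "$slot" primary
done
```

Because the control domain is in delayed reconfiguration mode, the removals take effect only after the primary domain reboots, regardless of whether they are entered one by one or from a script.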
    Reduce the number of CPU cores and the size of memory assigned to the control domain by specifying sizes smaller than the original with the ldm set-core and ldm set-memory commands.

    The following gives an example of setting the number of CPU cores of the control domain to 8 and the memory size to 14 GB, according to the configuration example.
# ldm set-core 8 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------
# ldm set-memory 14G primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------
Note - We recommend that you first configure the CPU cores with the ldm set-core command and then the memory with the ldm set-memory command. This facilitates the assignment of a contiguous block of memory. It also makes it easier to satisfy the CPU core and memory placement conditions required for physical partition dynamic reconfiguration. For details on the CPU core and memory placement conditions, see "Placement of CPU cores and memory" in "2.5.2 Considerations in System Operation for Dynamic Reconfiguration."
    Create a virtual console terminal concentrator service called vcc0 with the ldm add-vconscon command, and then start the virtual network terminal server daemon (vntsd) with the svcadm command. Configure a console connection to each logical domain via this vcc0.
# ldm add-vconscon port-range=5000-5200 vcc0 primary
------------------------------------------------------------------------------
Notice: The primary domain is in the process of a delayed reconfiguration.
Any changes made to the primary domain will only take effect after it reboots.
------------------------------------------------------------------------------

# svcadm enable vntsd
    Save the configuration information, and then restart Oracle Solaris.

    The following example checks the saved configuration information with the ldm list-spconfig command, saves the configuration under the name ldm-set1 with the ldm add-spconfig command, checks again with the ldm list-spconfig command that the configuration has been saved, and finally restarts Oracle Solaris.
# ldm list-spconfig
factory-default [current]
# ldm add-spconfig ldm-set1
# ldm list-spconfig
factory-default
ldm-set1 [current]
# shutdown -i6 -g0 -y
  b. Establish a redundant configuration for the system volume of the control domain.

    Establish a redundant configuration for the system volume of the control domain. This item describes an example of the commands for establishing a mirroring configuration with ZFS. For details, see the following documents presented on the Oracle Corporation web site (https://docs.oracle.com/en/).

    - For Oracle Solaris 10

    "How to Create a Mirrored Root Pool (Post Installation)" in the Oracle Solaris ZFS Administration Guide

    - For Oracle Solaris 11

    "How to Configure a Mirrored Root Pool (SPARC or x86/VTOC)" in the Oracle Solaris 11.1 Administration: ZFS File Systems
    To use other redundant configuration software, see the manual for that software.
    Execute the zpool status command to check the status of the root pool.

    The following example indicates that c2t50000393E802CCE2d0s0 has been assigned to the default root pool (rpool).
# zpool status rpool
  pool: rpool
state: ONLINE
  scan: none requested
config:
        NAME                       STATE     READ WRITE CKSUM
        rpool                      ONLINE       0     0     0
          c2t50000393E802CCE2d0s0  ONLINE       0     0     0
errors: No known data errors
    Execute the format command to check which disks can be added.

    The following example indicates that c3t50000393A803B13Ed0 exists as the other disk.
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c2t50000393E802CCE2d0 <TOSHIBA-MBF2300RC-3706 cyl 46873 alt 2 hd 20 sec 625>
/pci@8000/pci@4/pci@0/pci@0/scsi@0/iport@f/disk@w50000393e802cce2,0
/dev/chassis/FUJITSU-BBEXP.500000e0e06d027f/0123_HDD00/disk
1. c3t50000393A803B13Ed0 <TOSHIBA-MBF2300RC-3706 cyl 46873 alt 2 hd 20 sec 625>
/pci@8800/pci@4/pci@0/pci@0/scsi@0/iport@f/disk@w50000393a803b13e,0
/dev/chassis/FUJITSU-BBEXP.500000e0e06d243f/022U_HDD01/disk
Specify disk (enter its number): ^C
    Execute the zpool attach command to add the second disk to rpool to provide a mirror configuration.

    The following example adds c3t50000393A803B13Ed0s0 with the zpool attach command and then checks the status of the synchronization processing (resilver) with the zpool status command. The status and action fields show that the synchronization processing is in progress. Execute the zpool status command regularly until the processing ends.
# zpool attach rpool c2t50000393E802CCE2d0s0 c3t50000393A803B13Ed0s0
# zpool status rpool
  pool: rpool
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function in a degraded state.
action: Wait for the resilver to complete.
        Run 'zpool status -v' to see device specific details.
  scan: resilver in progress since Wed Jan 29 21:35:39 2014
        3.93G scanned out of 70.6G at 71.9M/s, 0h15m to go
        3.90G resilvered, 5.56% done
config:
        NAME                         STATE     READ WRITE CKSUM
        rpool                        DEGRADED     0     0     0
          mirror-0                   DEGRADED     0     0     0
            c2t50000393E802CCE2d0s0  ONLINE       0     0     0
            c3t50000393A803B13Ed0s0  DEGRADED     0     0     0  (resilvering)
    Upon completion of the synchronization processing, [state] changes to "ONLINE", as shown below.
# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 70.6G in 0h10m with 0 errors on Wed Jan 29 21:45:42 2014
config:
        NAME                         STATE     READ WRITE CKSUM
        rpool                        ONLINE       0     0     0
          mirror-0                   ONLINE       0     0     0
            c2t50000393E802CCE2d0s0  ONLINE       0     0     0
            c3t50000393A803B13Ed0s0  ONLINE       0     0     0
errors: No known data errors
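Since the resilver can take a while, the periodic zpool status checks can be wrapped in a simple polling loop. This loop is a sketch, not part of the original procedure; it assumes the "resilver in progress" wording shown in the example output, and the 60-second interval is arbitrary.

```shell
#!/bin/sh
# Sketch: poll "zpool status" until the resilver of rpool completes.
# Assumes the "resilver in progress" scan-line wording shown in the example
# output above; adjust the pattern and interval for your Solaris release.
while zpool status rpool | grep -q 'resilver in progress'; do
    sleep 60
done
zpool status rpool   # the final state should now show ONLINE
```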
  c. Configure a root domain.

    This item describes the procedure for configuring a root domain.
    Execute the ldm add-domain command to add a logical domain named root-dom0.
# ldm add-domain root-dom0
    Execute the ldm set-variable command to change the OpenBoot PROM environment variable "auto-boot?", which automatically boots Oracle Solaris, to "false" (disabled). By default, this setting is "true" (enabled), so OpenBoot PROM tries to start Oracle Solaris automatically when Oracle Solaris is not installed. Setting it to disabled facilitates the work to be performed before installing Oracle Solaris.
# ldm set-variable auto-boot\?=false root-dom0
    First, assign the CPU cores with the ldm set-core command, and then assign the memory with the ldm set-memory command.

    The following example assigns 12 CPU cores with the ldm set-core command and 24 GB of memory with the ldm set-memory command, according to the configuration example.
# ldm set-core 12 root-dom0
# ldm set-memory 24G root-dom0
Note - We recommend that you first configure the CPU cores with the ldm set-core command and then the memory with the ldm set-memory command. This facilitates the assignment of a contiguous block of memory. It also makes it easier to satisfy the CPU core and memory placement conditions required for physical partition dynamic reconfiguration. For details on the CPU core and memory placement conditions, see "Placement of CPU cores and memory" in "2.5.2 Considerations in System Operation for Dynamic Reconfiguration."
    Execute the ldm set-vconsole command to assign the virtual console (vcons).

    The following example executes the ldm set-vconsole command to assign port number 5000 of the service (vcc0) of the virtual console terminal concentrator in the control domain to the virtual console.
# ldm set-vconsole service=vcc0 port=5000 root-dom0
    Execute the ldm list-io -l command to display the PCI device assignment status.

    A line whose NAME begins with "/BB0" and whose [TYPE] column shows "PCIE" indicates a PCIe endpoint on the SPARC M10-4S (BB0). A line whose [DOMAIN] column is empty indicates an unassigned PCIe endpoint, and its root complex is shown in the [BUS] column.

    Therefore, you can see that PCIE1, PCIE2, PCIE3, PCIE5, PCIE6, and PCIE7 are unassigned root complexes on BB0.
# ldm list-io -l
NAME TYPE BUS DOMAIN STATUS
---- ---- --- ------ ------

(Omitted)
/BB0/CMUL/NET0 PCIE PCIE0 primary OCC
[pci@8000/pci@4/pci@0/pci@9]
network@0
network@0,1
/BB0/CMUL/SASHBA PCIE PCIE0 primary OCC
[pci@8000/pci@4/pci@0/pci@0]
scsi@0/iport@f/disk@w50000393e802cce2,0
scsi@0/iport@f/disk@w50000393d8285226,0
scsi@0/iport@f/smp@w500000e0e06d027f
scsi@0/iport@f/enclosure@w500000e0e06d027d,0
scsi@0/iport@v0
/BB0/PCI0 PCIE PCIE1 EMP
[pci@8100/pci@4/pci@0/pci@0]
/BB0/PCI3 PCIE PCIE2 EMP
[pci@8200/pci@4/pci@0/pci@0]
/BB0/PCI4 PCIE PCIE2 EMP
[pci@8200/pci@4/pci@0/pci@8]
/BB0/PCI7 PCIE PCIE3 EMP
[pci@8300/pci@4/pci@0/pci@0]
/BB0/PCI8 PCIE PCIE3 EMP
[pci@8300/pci@4/pci@0/pci@8]
/BB0/CMUL/NET2 PCIE PCIE4 primary OCC
[pci@8400/pci@4/pci@0/pci@a]
network@0
network@0,1
/BB0/PCI1 PCIE PCIE5 EMP
[pci@8500/pci@4/pci@0/pci@8]
/BB0/PCI2 PCIE PCIE5 EMP
[pci@8500/pci@4/pci@0/pci@9]
/BB0/PCI5 PCIE PCIE6 EMP
[pci@8600/pci@4/pci@0/pci@9]
/BB0/PCI6 PCIE PCIE6 EMP
[pci@8600/pci@4/pci@0/pci@11]
/BB0/PCI9 PCIE PCIE7 EMP
[pci@8700/pci@4/pci@0/pci@9]
/BB0/PCI10 PCIE PCIE7 EMP
[pci@8700/pci@4/pci@0/pci@11]
(Omitted)
    See the device paths (strings displayed as [pci@....]) in the above result and "A.6 SPARC M10-4S Device Paths" in the Fujitsu SPARC M12 and Fujitsu M10/SPARC M10 System Operation and Administration Guide to determine which root complexes are to be assigned to the root domain.
    In the configuration example, all the unassigned root complexes on BB0 checked above (PCIE1, PCIE2, PCIE3, PCIE5, PCIE6, and PCIE7) are to be assigned. So, execute the ldm add-io command to assign them to root-dom0.

    The following example shows command execution.
# ldm add-io PCIE1 root-dom0
# ldm add-io PCIE2 root-dom0
# ldm add-io PCIE3 root-dom0
# ldm add-io PCIE5 root-dom0
# ldm add-io PCIE6 root-dom0
# ldm add-io PCIE7 root-dom0
    Place the root domain in the bound state with the ldm bind-domain command, and then execute the ldm list-io command to check that the root complexes have been assigned.

    The following example binds root-dom0 with the ldm bind-domain command and then checks with the ldm list-io command that the root complexes have been assigned.

    A line whose [TYPE] column is "BUS" and whose [DOMAIN] column is "root-dom0" indicates a root complex assigned to root-dom0; the [BUS] column in that line gives the name of the assigned root complex.

    In the following example, you can check that PCIE1, PCIE2, PCIE3, PCIE5, PCIE6, and PCIE7 have been assigned to root-dom0.
# ldm bind-domain root-dom0
# ldm list-io
NAME   TYPE  BUS    DOMAIN     STATUS
----   ----  ---    ------     ------
PCIE0  BUS   PCIE0  primary    IOV
PCIE1  BUS   PCIE1  root-dom0  IOV
PCIE2  BUS   PCIE2  root-dom0  IOV
PCIE3  BUS   PCIE3  root-dom0  IOV
PCIE4  BUS   PCIE4  primary    IOV
PCIE5  BUS   PCIE5  root-dom0  IOV
PCIE6  BUS   PCIE6  root-dom0  IOV
PCIE7  BUS   PCIE7  root-dom0  IOV
PCIE8  BUS   PCIE8  primary    IOV
(Omitted)
    According to step 11. c, configure the root domains (root-dom1 in the example) of the other SPARC M10-4S units.
  d. Install Oracle Solaris in the root domain.

    This item describes the procedure for installing Oracle Solaris in the root domain.
    Execute the ldm start-domain command to start the root domain root-dom0.
# ldm start-domain root-dom0
LDom root-dom0 started
    Execute the ldm set-variable command to change the OpenBoot PROM environment variable "auto-boot?", which automatically boots Oracle Solaris, to "true" (enabled). After Oracle Solaris is installed, executing the ldm start-domain command also starts Oracle Solaris.
# ldm set-variable auto-boot\?=true root-dom0
    Execute the telnet command to connect to the console of the root domain.

    The following example checks with the ldm list-domain command that the console port number of root-dom0 is 5000, and then connects to localhost port 5000 with the telnet command to confirm that root-dom0 has stopped at the OpenBoot PROM (OBP) prompt.
# ldm list-domain
NAME       STATE   FLAGS  CONS  VCPU MEMORY UTIL UPTIME
primary    active  -n-cv- UART  16   14G    0.0% 7h 7m
root-dom0  active  -t---- 5000  24   24G    0.0% 20s
root-dom1  bound   ------ 5001  24   24G
# telnet localhost 5000
....
{0} ok
    Install Oracle Solaris in the root domain.

    For details on the versions and conditions of Oracle Solaris required for physical partition dynamic reconfiguration, see Table 1-3 and Table 1-4.
    The following example executes the command to start Oracle Solaris 11 installation through the network.
{0} ok boot net:dhcp
....
    For details on the installation, see the following documents, presented on the Oracle Corporation web site (https://docs.oracle.com/en/).

    - Oracle Solaris 10

    Oracle Solaris 10 1/13 Installation Guide

    - Oracle Solaris 11

    Installing Oracle Solaris 11.x Systems
    According to step 11. d, install the other root domains (root-dom1 in the example) in the same way.
  e. Assign the virtual I/O service to the root domain.

    This item describes an example of the procedure for configuring the virtual I/O service on the root domain.

    In this example, an entire physical disk of the root domain is exported as a back-end of the virtual disk service (vds), and an Ethernet card port is assigned to the virtual switch network device (vsw).

    For details on the virtual I/O service, see the Oracle VM Server for SPARC Administration Guide published by Oracle Corporation.
    Log in to the root domain with the root privilege.
root-dom0 console login: root
Password:
....
    To identify the back-end device (physical disk) to be assigned to the virtual disk service (vds), execute the format command to display the disks and their device paths.

    A string in the format cXtXXXXXXXXXXXXXXXXdX indicates a disk, and a string beginning with "/pci@" indicates its device path.
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c2t50000393A802CCE2d0 <TOSHIBA-MBF2300RC-3706 cyl 46873 alt 2 hd 20 sec 625>
/pci@8100/pci@4/pci@0/pci@0/scsi@0/iport@f/disk@w50000393e802cce2,0
1. c3t50000393D8285226d0 <TOSHIBA-MBF2300RC-3706 cyl 46873 alt 2 hd 20 sec 625>
/pci@8200/pci@4/pci@0/pci@0/scsi@0/iport@f/disk@w50000393d8285226,0
2. c4t50000393A804B13Ed0 <TOSHIBA-MBF2300RC-3706 cyl 46873 alt 2 hd 20 sec 625>
/pci@3200/pci@4/pci@0/pci@0/scsi@0/iport@f/disk@w50000393a803b13e,0
....
    Check the physical location of each disk by referring to the device paths displayed in the above result and "A.6 SPARC M10-4S Device Paths" in the Fujitsu SPARC M12 and Fujitsu M10/SPARC M10 System Operation and Administration Guide. Then, determine the back-end disks (cXtXXXXXXXXXXXXXXXXdX) to be assigned to the virtual disk service.

    The configuration example assigns two disks (c3t50000393D8285226d0 and c4t50000393A804B13Ed0) as back-ends of the virtual disks of the guest domains (guest0, guest1).
    Next, to determine the network interfaces to be assigned to the virtual switch service (vsw), execute the dladm show-phys command to display the network interface names (LINK) associated with the locations (LOC) of the PCIe slots.

    From the displayed results, check the location of the network interface and determine the network interface to be assigned to the virtual switch service.

    The configuration example assigns two network interfaces (net1 and net2) to the virtual switch associated with the virtual network of each guest domain.
# dladm show-phys -L
LINK        DEVICE      LOC
net0        igb0        BB#0-PCI#0
net1        igb1        BB#0-PCI#1
net2        igb2        BB#0-PCI#2
net3        igb3        BB#0-PCI#3
...
    Execute the ldm add-vdiskserver command to add the virtual disk service to the root domain.

    The following example adds the virtual disk service (vds0) to the root domain (root-dom0) with the ldm add-vdiskserver command.
# ldm add-vdiskserver vds0 root-dom0
    Execute the ldm add-vdiskserverdevice command to export the back-ends of the virtual disks from the root domain.

    The following example executes the ldm add-vdiskserverdevice command to assign two back-end disks to vds0 to export them.
# ldm add-vdiskserverdevice /dev/dsk/c3t50000393D8285226d0s2 vol0@vds0
# ldm add-vdiskserverdevice /dev/dsk/c4t50000393A804B13Ed0s2 vol1@vds0
    Execute the ldm add-vswitch command to add the virtual switches to the root domain.

    The following example adds a virtual switch for each guest domain (vsw0 and vsw1) and assigns a different physical network interface (net1, net2) to each.
# ldm add-vswitch net-dev=net1 vsw0 root-dom0
# ldm add-vswitch net-dev=net2 vsw1 root-dom0
    Similarly, add the virtual I/O service to the root domain (root-dom1) to which the I/O of the SPARC M10-4S on the BB1 side was assigned.

    As an example, assign the following virtual disk service name and virtual switch names to root-dom1. For details on the step, see 11. e.

    Virtual disk service name: vds1 (assign vol10 and vol11 as two back-end volume names.)

    Virtual switch name: vsw10, vsw11
  f. Configure a guest domain.

    This item describes the procedure for configuring a guest domain.
    Execute the ldm add-domain command to add a logical domain named guest0.
# ldm add-domain guest0
    Execute the ldm set-variable command to change the OpenBoot PROM environment variable "auto-boot?", which automatically boots Oracle Solaris, to "false" (disabled). By default, this setting is "true" (enabled), so OpenBoot PROM tries to start Oracle Solaris automatically when Oracle Solaris is not installed. Setting it to disabled facilitates the work to be performed before installing Oracle Solaris.
# ldm set-variable auto-boot\?=false guest0
    First, assign the CPU cores with the ldm set-core command, and then assign the memory with the ldm set-memory command.

    The following example assigns 16 CPU cores with the ldm set-core command and 32 GB of memory with the ldm set-memory command, according to the configuration example.
# ldm set-core 16 guest0
# ldm set-memory 32G guest0
Note - We recommend that you first configure the CPU cores with the ldm set-core command and then the memory with the ldm set-memory command. This facilitates the assignment of a contiguous block of memory. It also makes it easier to satisfy the CPU core and memory placement conditions required for physical partition dynamic reconfiguration. For details on the CPU core and memory placement conditions, see "Placement of CPU cores and memory" in "2.5.2 Considerations in System Operation for Dynamic Reconfiguration."
    Execute the ldm set-vconsole command to assign the virtual console (vcons).

    The following example executes the ldm set-vconsole command to assign port number 5100 of the service (vcc0) of the virtual console terminal concentrator in the control domain to the virtual console.
# ldm set-vconsole service=vcc0 port=5100 guest0
    Execute the ldm add-vdisk command to assign the virtual disks (vdisk).

    The following example assigns two virtual disks (vdisk0, vdisk10) according to the configuration example. As the back-end of each virtual disk, it specifies the back-ends (vol0@vds0, vol10@vds1) added to the two root domains (root-dom0, root-dom1).
# ldm add-vdisk vdisk0 vol0@vds0 guest0
# ldm add-vdisk vdisk10 vol10@vds1 guest0
    Execute the ldm add-vnet command to assign the virtual network devices (vnet).

    The following example assigns two virtual network devices (vnet0, vnet10) according to the configuration example. Each virtual network device is connected to one of the virtual switches (vsw0, vsw10) added to the two root domains (root-dom0, root-dom1).
# ldm add-vnet vnet0 vsw0 guest0
# ldm add-vnet vnet10 vsw10 guest0
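The individual ldm commands that configure guest0 can be collected into a single script. The following dry-run sketch is not part of the original procedure: it only prints each command for review (remove the echo to execute them for real), and the domain name, sizes, port, back-ends, and switches all follow this configuration example.

```shell
#!/bin/sh
# Dry-run sketch: print the ldm commands that configure guest domain guest0
# per the configuration example. Remove "echo" to actually execute them.
dom=guest0
echo ldm add-domain "$dom"
echo ldm set-variable 'auto-boot?=false' "$dom"
echo ldm set-core 16 "$dom"                           # CPU cores first ...
echo ldm set-memory 32G "$dom"                        # ... then memory
echo ldm set-vconsole service=vcc0 port=5100 "$dom"
echo ldm add-vdisk vdisk0 vol0@vds0 "$dom"            # back-end on root-dom0
echo ldm add-vdisk vdisk10 vol10@vds1 "$dom"          # back-end on root-dom1
echo ldm add-vnet vnet0 vsw0 "$dom"                   # switch on root-dom0
echo ldm add-vnet vnet10 vsw10 "$dom"                 # switch on root-dom1
```

Keeping the set-core/set-memory order inside the script preserves the placement recommendation given in the note above.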
    Similarly, configure the other guest domain (in this example, guest1) as described in step 11. f.

    The following devices are to be assigned based on the configuration example:

    Virtual disk: vdisk1 (vol1@vds0 for the back-end), vdisk11 (vol11@vds1 for the back-end)

    Virtual network: vnet1 (vsw1 for virtual switch), vnet11 (vsw11 for virtual switch)
Note - If multiple virtual disks or virtual network devices are assigned, record the ID value assigned to each virtual device from the output of the ldm list -l command. If a virtual device is dynamically deleted and later added again, specify the recorded ID so that the virtual device path remains unchanged.
  1. g. Install Oracle Solaris in the guest domain.

    This item describes the procedure for installing Oracle Solaris in the guest domain.
  1. Execute the ldm bind-domain command to bind the guest domain and then start it by executing the start-domain command.

    The following example shows the execution of the commands for starting guest0.
# ldm bind-domain guest0
# ldm start-domain guest0
LDom guest0 started
  1. Execute the ldm set-variable command to change the OpenBoot PROM environment variable "auto-boot?", which automatically boots Oracle Solaris, to "true" (enabled). Once Oracle Solaris is installed, executing the ldm start-domain command then also starts Oracle Solaris.
# ldm set-variable auto-boot\?=true guest0
  1. Execute the telnet command to connect to the console of the guest domain.

    The following example executes the ldm list-domain command to check that the port number of the guest0 console is "5100". It then connects to localhost port number "5100" with the telnet command and confirms that guest0 has stopped at the OpenBoot PROM prompt.
# ldm list-domain
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- UART 16 14G 0.0% 8h 7m
guest0 active -t---- 5100 32 32G 0.0% 20s
root-dom0 active -n--v- 5000 24 24G 0.0% 43s
root-dom1 active -n--v- 5001 24 24G 0.0% 20s
guest1 inactive ------ 32 32G
# telnet localhost 5100
....
{0} ok
  1. Install Oracle Solaris in the guest domain. For details on the versions and conditions of Oracle Solaris required for physical partition dynamic reconfiguration, see Table 1-3 and Table 1-4.

    The following example executes the command to start Oracle Solaris 11 installation through the network.
{0} ok boot net:dhcp
....
  1. For details on the installation, see the following documents, available on the Oracle Corporation website (https://docs.oracle.com/en/).

    - Oracle Solaris 10

    Oracle Solaris 10 1/13 Installation Guide

    - Oracle Solaris 11

    Installing Oracle Solaris 11.x Systems
  1. Install the other guest domains (guest1 in the example) in the same way, following step 11. g.
  1. h. Establish a redundant configuration for the virtual I/Os of a guest domain.

    The following describes an example of the procedure for establishing a redundant configuration for two virtual network interfaces (vnets) assigned to guest domain guest0, using IPMP. For details on the procedures for other redundant configurations, see the documentation about the software for the respective redundant configurations.
  1. Log in to guest domain guest0.

    In the example below, the ldm list-domain command is used to check the port number of the console of guest0, and the telnet command is used to connect to port number "5100."
# ldm list-domain
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- UART 16 14G 0.0% 8h 7m
guest0 active -n---- 5100 32 32G 0.0% 20s
guest1 active -n---- 5101 32 32G 0.0% 19s
root-dom0 active -n--v- 5000 24 24G 0.0% 43s
root-dom1 active -n--v- 5001 24 24G 0.0% 20s
# telnet localhost 5100
....
guest0 console login: root
Password:
...
guest0#
  1. Execute the dladm command to check that the virtual network devices are visible.

    In the example below, the virtual network devices are visible as network interfaces net0 and net1.
guest0# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
net0 Ethernet up 0 unknown vnet0
net1 Ethernet up 0 unknown vnet1
  1. Execute the ipadm show-if command to check that net0 and net1 are not displayed.
guest0# ipadm show-if
IFNAME     CLASS    STATE    ACTIVE OVER
lo0        loopback ok       yes    --
  1. Execute the ipadm create-ip command to create IP interfaces net0 and net1, and then use the ipadm show-if command to check that they have been created normally.
guest0# ipadm create-ip net0
guest0# ipadm create-ip net1
guest0# ipadm show-if
IFNAME     CLASS    STATE    ACTIVE OVER
lo0        loopback ok       yes    --
net0       ip       down     no     --
net1       ip       down     no     --
  1. Execute the ipadm create-ipmp command to create IPMP interface ipmp0, and then execute the ipadm add-ipmp command to add IP interfaces net0 and net1 to the IPMP group.
guest0# ipadm create-ipmp ipmp0
guest0# ipadm add-ipmp -i net0 -i net1 ipmp0
  1. Execute the ipadm create-addr command to assign an IP address to IPMP interface ipmp0, and then use the ipadm show-addr command to check the setting. In the example below, a fixed IP address is assigned.
guest0# ipadm create-addr -T static -a local=xx.xx.xx.xx/24 ipmp0/v4
guest0# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
ipmp0/v4 static ok xx.xx.xx.xx/24
lo0/v6 static ok ::1/128
  1. Execute the ipadm set-ifprop command to set a standby interface, and use the ipmpstat -i command to check the IPMP configuration.
guest0# ipadm set-ifprop -p standby=on -m ip net1
guest0# ipmpstat -i
INTERFACE ACTIVE GROUP FLAGS LINK PROBE STATE
net1 no ipmp0 is----- up disabled ok
net0 yes ipmp0 --mbM-- up disabled ok
  1. Perform the same procedure for the other guest domains (guest1 in the example).
  1. i. Check the memory resource assignment status, and group the distributed memory blocks.

    First, see steps 11. h and 11. i in "A.2.2 Example of the Physical Partition Configuration Procedure," and group the distributed memory blocks.

    The following example shows the result of executing the ldm list-devices command with memory distribution based on the configuration example.

    You can confirm that each logical domain is assigned a single memory block and that memory is not fragmented into many small blocks of 256 MB to 512 MB.
# ldm list-devices -a memory
MEMORY
    PA                   SIZE            BOUND
    0x700000000000       24G             root-dom1
    0x700600000000       8G
    0x720000000000       32G             guest0
    0x740000000000       32G             guest1
    0x760000800000       1272M           _sys_
    0x760050000000       24G             root-dom0
    0x760650000000       6912M
    0x780000000000       32G
    0x7a0000000000       32G
    0x7c0000000000       32G
    0x7e0000800000       1272M           _sys_
    0x7e0050000000       512M            _sys_
    0x7e0070000000       256M            _sys_
    0x7e0080000000       14G             primary
    0x7e0400000000       16G
  1. j. Check whether there are enough free memory blocks, and make an adjustment accordingly.

    This step describes how to check whether the memory blocks of the logical domains can be moved to free memory blocks when the SPARC M10-4S is released, and how to make an adjustment if they cannot.
  1. i. Check the usage status of the contiguous memory regions (memory blocks).

    Execute the prtdiag command to check the mapping between the memory physical addresses and SPARC M10-4Ss.
# prtdiag
(Omitted)
======================= Physical Memory Configuration ========================
Segment Table:
--------------------------------------------------------------

Base           Segment  Interleave   Bank     Contains
Address        Size     Factor       Size     Modules
--------------------------------------------------------------

0x7e0000000000 32 GB    4            8 GB     /BB0/CMUL/CMP0/MEM00A
(Omitted)
0x7c0000000000 32 GB    4            8 GB     /BB0/CMUL/CMP1/MEM10A
(Omitted)
0x7a0000000000 32 GB    4            8 GB     /BB0/CMUU/CMP0/MEM00A
(Omitted)
0x780000000000 32 GB    4            8 GB     /BB0/CMUU/CMP1/MEM10A
(Omitted)
0x760000000000 32 GB    4            8 GB     /BB1/CMUL/CMP0/MEM00A
(Omitted)
0x740000000000 32 GB    4            8 GB     /BB1/CMUL/CMP1/MEM10A
(Omitted)
0x720000000000 32 GB    4            8 GB     /BB1/CMUU/CMP0/MEM00A
(Omitted)
0x700000000000 32 GB    4            8 GB     /BB1/CMUU/CMP1/MEM10A
(Omitted)
  1. Rearranging this example output in ascending order of physical address gives the correspondence between the physical addresses and the SPARC M10-4S chassis shown in the following table.
Table A-9  Example of Correspondence Between Physical Addresses and the SPARC M10-4S
Base Address (Physical Address) Building Block Configuration of SPARC M10-4S
0x700000000000 and after BB1
0x720000000000 and after BB1
0x740000000000 and after BB1
0x760000000000 and after BB1
0x780000000000 and after BB0
0x7a0000000000 and after BB0
0x7c0000000000 and after BB0
0x7e0000000000 and after BB0
  1. Matching the result of step 11. i against the physical addresses confirmed with the prtdiag command gives the following memory block usage status.
Table A-10  Example of Memory Block Use Statuses
SPARC M10-4S Physical Address Size Logical Domain
BB1 (target for replacement) 0x700000000000 24 GB root-dom1
0x700600000000 8 GB Unassigned
0x720000000000 32 GB guest0
0x740000000000 32 GB guest1
0x760050000000 24 GB root-dom0
0x760650000000 6,912 MB Unassigned
BB0 0x780000000000 32 GB Unassigned
0x7a0000000000 32 GB Unassigned
0x7c0000000000 32 GB Unassigned
0x7e0080000000 14 GB primary
0x7e0400000000 16 GB Unassigned
  1. ii. Check the size and quantity of the movement source memory blocks.

    While referring to the check results of the memory block use status, check the memory block (hereafter called "source memory block") assigned to the SPARC M10-4S to be replaced.

    In "Table A-10 Example of Memory Block Use Statuses," you can see that the memory blocks assigned to logical domains on the BB1 side are 32 GB x 2 (guest0 and guest1) and 24 GB x 1 (root-dom0).
Note - A root domain to which I/O devices of the SPARC M10-4S to be released are assigned (for example, root-dom1 if BB1 is released) is not a target for movement, because it is unbound and placed in the inactive state when the SPARC M10-4S is released.
  1. iii. Check the empty memory blocks.

    Next, based on the check results obtained in step i, check the memory blocks on the SPARC M10-4S that is not being disconnected that are not assigned to any logical domain (hereinafter referred to as "empty memory blocks").

    In "Table A-10 Example of Memory Block Use Statuses," you can see that the number of empty memory blocks is 32 GB x 3 and 16 GB x 1.
  1. iv. Check whether the memory block can be moved.

    Using the check results obtained in steps ii and iii, check whether the source memory block can be moved to the empty memory block.

    This is possible if the size of the empty memory block is equal to or greater than that of the source memory block.


    For example, in "Table A-10 Example of Memory Block Use Statuses," there are 32 GB x 3 empty memory blocks as the destinations for guest0 (32 GB), guest1 (32 GB), and root-dom0 (24 GB). So, you can see that the memory is placed such that BB1 can be disconnected. This is summarized in "Table A-11 Destination Candidate Memory Blocks."


    If this is the case, skip to step 12.
Table A-11  Destination Candidate Memory Blocks
SPARC M10-4S Size Logical Domain Destination Candidate
BB1 (target for replacement) 24 GB root-dom1 -
8 GB Unassigned -
32 GB guest0 32 GB in BB0
32 GB guest1 32 GB in BB0
24 GB root-dom0 32 GB in BB0
6,912 MB Unassigned -
BB0 32 GB Unassigned Moved here
32 GB Unassigned Moved here
32 GB Unassigned Moved here
14 GB primary -
16 GB Unassigned Excluded from destination candidates due to size insufficiency
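The first-fit reasoning behind steps ii through iv can be sketched with simple shell arithmetic. The snippet below is an illustrative sketch only (not an ldm or XSCF command); the block sizes in GB are taken from the Table A-11 example.

```shell
# Illustrative first-fit check: can each source memory block on the
# chassis to be released fit into a remaining free memory block?
# Sizes in GB follow Table A-11 (guest0, guest1, root-dom0 sources).
src="32 32 24"
free="32 32 32 16"
for s in $src; do
    placed=no
    remaining=""
    for f in $free; do
        if [ "$placed" = no ] && [ "$f" -ge "$s" ]; then
            placed=yes                    # consume this free block
        else
            remaining="$remaining $f"
        fi
    done
    free=$remaining
    echo "source ${s} GB: movable=$placed"
done
```

Each source block reports movable=yes here, which matches the conclusion that BB1 can be disconnected; if any block reported movable=no, the adjustment described in step v would be required.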
  1. v. If there is no destination available, reduce the memory assignment in the logical domain.

    This step describes how to proceed when there are not enough free memory blocks as destination candidates.

    If the conditions described in "Placement of CPU cores and memory" in "2.5.2 Considerations in System Operation for Dynamic Reconfiguration" are not satisfied, memory block movement will fail.

    For example, in the configuration in "Table A-12 Example of Placement of Memory Blocks With No Destinations," the number of source memory blocks is 32 GB x 1 and 24 GB x 3. Meanwhile, the free memory blocks at the destination are 32 GB x 3 and 16 GB x 1.

    Therefore, you can move the one 32-GB memory block (guest0) and two of the three 24-GB memory blocks (any two of guest1, guest2, and root-dom0) on the BB1 side.

    However, after each 24-GB memory block is moved into a 32-GB free block, 8 GB of that block remains, so the free memory blocks left at the destination are 16 GB x 1 and 8 GB x 2. The remaining one of guest1, guest2, and root-dom0, which is assigned a 24-GB memory block, therefore cannot be moved.
Table A-12  Example of Placement of Memory Blocks With No Destinations
SPARC M10-4S Size Logical Domain Destination Candidate
BB1 (target for replacement) 24 GB guest2 May not be movable
8 GB root-dom1 -
32 GB guest0 32 GB in BB0
24 GB guest1 May not be movable
24 GB root-dom0 May not be movable
6,912 MB Unassigned -
BB0 32 GB Unassigned Moved here
32 GB Unassigned Any one of guest1, guest2, and root-dom0 (24 GB) moves here, with 8 GB remaining.
32 GB Unassigned Any one of guest1, guest2, and root-dom0 (24 GB) moves here, with 8 GB remaining.
14 GB primary -
16 GB Unassigned Excluded from destination candidates due to size insufficiency
  1. In this case, you need to reduce the size of a possibly unmovable memory block in the logical domain to be equal to or less than the memory block size at the destination.

    In the above example, you must change the memory size for either one of logical domains guest1, guest2, or root-dom0 from 24 GB to 16 GB or less.

    The following example shows how to change the memory size for guest domain guest2 to 16 GB or less.

    Execute the ldm list-domain command to check the logical domain status.
# ldm list-domain
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- UART 16 14G 0.5% 40m
guest0 active -n---- 5100 32 32G 0.0% 26m
guest1 active -n---- 5101 32 24G 0.0% 26m
guest2 active -n---- 5102 12 24G 0.1% 1m
root-dom0 active -n--v- 5000 24 24G 0.0% 26m
root-dom1 active -n--v- 5001 16 8G 0.0% 26m
  1. Execute the ldm stop-domain command to stop the domain, and then execute the ldm unbind-domain command to place it in the inactive state.

    The following example executes the command to place guest domain guest2 in the inactive state.
# ldm stop-domain guest2
Ldom guest2 stopped
# ldm unbind-domain guest2
Note - Before stopping a domain with a virtual I/O service configuration, you must first stop the guest domain to which the corresponding virtual I/O devices are assigned by using the ldm stop-domain command.
  1. If the version of Oracle VM Server for SPARC is earlier than 3.2, execute the ldm set-memory command to reconfigure the target logical domain memory.

    Set the memory size of guest domain guest2 to 16 GB (16,384 MB) or less and a multiple of (the number of CPU cores x 256 MB), because 6 cores (12 vcpus) are assigned to the domain.

    16384 / (6 x 256) = 10.66, rounded down to 10

    Therefore, reconfigure the memory size to (6 x 256 MB) x 10 = 15,360 MB (15 GB) for guest domain guest2.


    The following example executes the ldm set-memory command to reconfigure the memory size to 15 GB.
# ldm set-memory 15G guest2
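The size calculation above can be checked with shell arithmetic. This is an illustrative sketch only (not an ldm command); the values match the guest2 example of 6 cores and a 16-GB upper limit.

```shell
# Largest memory size <= 16,384 MB that is a multiple of
# (number of CPU cores x 256 MB), for a domain with 6 cores.
cores=6
limit_mb=16384
granule_mb=$((cores * 256))            # 1536 MB per unit
units=$((limit_mb / granule_mb))       # 16384 / 1536 = 10 (rounded down)
new_size_mb=$((units * granule_mb))    # 15360 MB = 15 GB
echo "${new_size_mb} MB"
```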
  1. Execute the ldm list-domain command to confirm that the memory size has been changed as reconfigured.

    The following example executes the ldm list-domain command to show that the memory size of guest domain guest2 is reconfigured to 15 GB.
# ldm list-domain
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- UART 16 14G 0.5% 40m
guest0 active -n---- 5100 32 32G 0.0% 26m
guest1 active -n---- 5101 32 24G 0.0% 26m
root-dom0 active -n--v- 5000 24 24G 0.0% 26m
root-dom1 active -n--v- 5001 16 8G 0.0% 26m
guest2 inactive ------ 12 15G
  1. Execute the ldm bind-domain and ldm start-domain commands to bind and start the logical domain.

    The following example executes the ldm bind-domain and ldm start-domain commands to start guest domain guest2.
# ldm bind-domain guest2
# ldm start-domain guest2
LDom guest2 started
  1. Execute the ldm list-domain command, and then confirm that the logical domain has started.

    The following example executes the ldm list-domain command to show that guest domain guest2 has started with the reconfigured memory size.

    Confirm that [STATE] shows "active" and that the second character from the left in the string of [FLAGS] is "n".
# ldm list-domain
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- UART 16 14G 0.5% 44m
guest0 active -n---- 5100 32 32G 0.0% 30m
guest1 active -n---- 5101 32 24G 0.0% 30m
guest2 active -n---- 5102 12 15G 0.1% 1m
root-dom0 active -n---- 5000 24 24G 0.0% 30m
root-dom1 active -n---- 5001 16 8G 0.0% 30m
  1. Execute the ldm list-devices -a memory command to check whether the memory block placement satisfies the conditions described in "Placement of CPU cores and memory" in "2.5.2 Considerations in System Operation for Dynamic Reconfiguration."

    The following example executes the ldm list-devices -a memory command.
# ldm ls-devices -a mem
MEMORY
    PA                   SIZE            BOUND
(BB1)
    0x700000000000       32G             guest0  →Move to 0x780000000000 (32GB)
    0x720000000000       24G             root-dom0 →Move to 0x7a0000000000 (32GB)
    0x720600000000       8G
    0x740000000000       8G              root-dom1 (not to be moved)
    0x740200000000       24G
    0x760000800000       1272M           _sys_
    0x760050000000       24G             guest1  →Move to 0x7c0000000000 (32GB)
    0x760650000000       6912M
(BB0)
    0x780000000000       32G
    0x7a0000000000       32G
    0x7c0000000000       32G
    0x7e0000800000       1272M           _sys_
    0x7e0050000000       512M            _sys_
    0x7e0070000000       256M            _sys_
    0x7e0080000000       14G             primary
    0x7e0400000000       15G             guest2
    0x7e07c0000000       1G
  1. Save the configured logical domain configuration information in the XSCF.
    Execute the ldm set-spconfig command to save the configured information.
    The following example checks the saved configuration information and then saves it with the same name as that of the existing configuration information.
  1. Execute the ldm list-spconfig command to check the current configuration information.
# ldm list-spconfig
factory-default
ldm-set1 [next poweron]
  1. Execute the ldm remove-spconfig command to delete the configuration information to be overwritten.
# ldm remove-spconfig ldm-set1
  1. Execute the ldm add-spconfig command to re-save the configuration information.
# ldm add-spconfig ldm-set1
  1. Execute the ldm list-spconfig command to check that the saved configuration information has become [current].
# ldm list-spconfig
factory-default
ldm-set1 [current]
  1. Save the configured logical domain configuration information to an XML file.
    In case the configuration information saved to the XSCF becomes unusable, also save the configuration information to an XML file. It is recommended to save the XML file to different media.
    The following describes the example procedure.
  1. Execute the ldm list-domain command to check that all the logical domains are active.
# ldm list-domain
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- UART 16 14G 0.0% 10h 7m
guest0 active -n---- 5100 32 32G 0.0% 2h 20s
guest1 active -n---- 5101 32 32G 0.0% 2h 5s
root-dom0 active -n---- 5000 24 24G 0.0% 2h 43s
root-dom1 active -n---- 5001 24 24G 0.0% 2h 20s
  1. Execute the ldm list-constraints command to save the configuration information to an XML file.
# ldm list-constraints -x > /ldm-set1.xml