Problems Resolved in Oracle Solaris 11.4
The following table lists the problems resolved in Oracle Solaris 11.4. You might encounter them in supported releases earlier than Oracle Solaris 11.4.
Bug | 24718083 |
Model | SPARC M12-1, SPARC M12-2, SPARC M12-2S, SPARC M10-1, SPARC M10-4, SPARC M10-4S |
Description | If the XSCF is rebooted while the control domain is running, the power capping function will not work. |
Workaround | This has been modified with SRU 11.3.23.5.0. [How to restore] Execute the svcadm command to restart the ldoms/ldmd service. # svcadm restart ldmd |
Bug | 24819902 |
Model | SPARC M12-2S, SPARC M10-4S |
Description | In the control domain with SRU 11.3.8.7.0 (Oracle VM Server for SPARC 3.4) or later applied, the deleteboard command may behave incorrectly when at least one logical domain is bound to a core shared with another logical domain. This is because Oracle VM Server for SPARC miscalculates the number of CPU cores to be removed. [Example] - The deleteboard -m unbind=resource command succeeds, but more CPUs than necessary are removed. - The deleteboard -m unbind=resource command fails even though the same CPU allocation succeeds with Oracle VM Server for SPARC 3.3. - The deleteboard -m unbind=none command fails even though there are sufficient free CPUs on the remaining system boards. |
Workaround | This has been modified with SRU 11.3.23.5.0. Apply one of the following workarounds. (1) Assign virtual CPUs with the whole-core constraint. (2) Change the PPAR DR policy to "targeted". # svcadm disable ldmd # svccfg -s ldmd setprop ldmd/fj_ppar_dr_policy=targeted # svcadm refresh ldmd # svcadm enable ldmd |
Bug | 25028104 |
Model | SPARC M12-2S, SPARC M10-4S |
Description | If the root complex is dynamically added/deleted by dynamic reconfiguration of the physical partition, the PCIe end point device under the added/deleted root complex may not be reflected in the PCIe end point device configuration information shown by the XSCF showpparinfo(8) command. |
Workaround | [How to restore] Execute the svcadm command in the control domain to restart the ldoms/ldmd service. # svcadm restart ldoms/ldmd |
Bug | 26822709 26934364 |
Model | SPARC M12-2S, SPARC M10-4S |
Description | On the control domain with SRU 11.3.8.7.0 (Oracle VM Server for SPARC 3.4) or later installed, dynamic reconfiguration of a physical partition with the XSCF deleteboard(8) command may fail. This phenomenon may occur when the following conditions are met. 1) Memory degradation has occurred (*1). 2) The PPAR DR policy is set to "ratio" or "auto" (*2). Or 1) hvdump-reboot is set to "off" (*3). 2) The physical partition has been reset due to the XSCF reset(8) command with the xir option or a hardware error. 3) The PPAR DR policy is set to "ratio" or "auto" (*2). *1 How to check for memory degradation primary# ldm list-devices -a -S memory If the STATUS value is "fail" in the output, a memory area of the size shown in SIZE, starting at the physical address shown in PA on the same line, has been degraded. *2 How to check the PPAR DR policy primary# svccfg -s ldmd listprop ldmd/fj_ppar_dr_policy ldmd/fj_ppar_dr_policy astring auto In this case, the PPAR DR policy is set to "auto." *3 How to check hvdump-reboot primary# ldm list-hvdump hvdump=on hvdump-reboot=off In this case, hvdump-reboot is set to "off." |
Workaround | This has been modified with SRU 11.3.29.5.0. Change the PPAR DR policy to "targeted" in the following way. primary# svcadm disable ldmd primary# svccfg -s ldmd setprop ldmd/fj_ppar_dr_policy=targeted primary# svcadm refresh ldmd primary# svcadm enable ldmd [How to restore] Restart the physical partition with the XSCF poweroff(8) and poweron(8) commands. Example: Physical partition is 0 XSCF> poweroff -y -p 0 XSCF> poweron -y -p 0 |
Bug | 26822709 26934364 |
Model | SPARC M12-1, SPARC M12-2, SPARC M12-2S, SPARC M10-1, SPARC M10-4, SPARC M10-4S |
Description | On the control domain with SRU 11.3.2.4.0 (Oracle VM Server for SPARC 3.3) or later installed, a memory operation performed with the Oracle Solaris ldm remove-memory command with the -g option may fail. This phenomenon may occur when the following conditions are met. 1) Memory degradation has occurred (*1). Or 1) hvdump-reboot is set to "off" (*2). 2) The physical partition has been reset due to the XSCF reset(8) command with the xir option or due to a hardware error. *1 How to check for memory degradation primary# ldm list-devices -a -S memory If the STATUS value is "fail" in the output, a memory area of the size shown in SIZE, starting at the physical address shown in PA on the same line, has been degraded. *2 How to check hvdump-reboot primary# ldm list-hvdump hvdump=on hvdump-reboot=off In this case, hvdump-reboot is set to "off." |
Workaround | This has been modified with SRU 11.3.29.5.0. |
Bug | 26822709 26934364 |
Model | SPARC M12-1, SPARC M12-2, SPARC M12-2S, SPARC M10-1, SPARC M10-4, SPARC M10-4S |
Description | On the control domain with SRU 11.3.2.4.0 (Oracle VM Server for SPARC 3.3) or later installed, the Oracle Solaris ldm list-rsrc-group command may display incorrect memory amounts. This phenomenon may occur when the following conditions are met. 1) Memory degradation has occurred (*1). Or 1) hvdump-reboot is set to "off" (*2). 2) The physical partition has been reset due to the XSCF reset(8) command with the xir option or due to a hardware error. *1 How to check for memory degradation primary# ldm list-devices -a -S memory If the STATUS value is "fail" in the output, a memory area of the size shown in SIZE, starting at the physical address shown in PA on the same line, has been degraded. *2 How to check hvdump-reboot primary# ldm list-hvdump hvdump=on hvdump-reboot=off In this case, hvdump-reboot is set to "off." |
Workaround | This has been modified with SRU 11.3.29.5.0. |
Bug | 24567071 |
Model | SPARC M12-1, SPARC M12-2, SPARC M12-2S |
Description | If dynamic PCIe bus assignment and the dynamic reconfiguration function for PCIe end points are executed in the following procedure, only one of the two onboard ixgbe ports of the SPARC M12 is recognized. 1. Assign the PCIe bus that is connected to the two onboard ixgbe ports to the root domain by using the dynamic PCIe bus assignment function. 2. Delete the two onboard ixgbe ports under the PCIe bus assigned in step 1 from the root domain by using the dynamic reconfiguration function for PCIe end points. 3. Assign the two ixgbe ports deleted in step 2 to an I/O domain by using the dynamic reconfiguration function for PCIe end points. 4. Delete the two ixgbe ports added in step 3 from the I/O domain by using the dynamic reconfiguration function for PCIe end points. 5. Return the two ixgbe ports deleted in step 4 to the root domain by using the dynamic reconfiguration function for PCIe end points. |
Workaround | This has been modified with SRU 11.3.17.5.0. Apply SRU 11.3.17.5.0 or later to the control and root domains. [How to restore] Using the dynamic PCIe bus assignment function, delete the PCIe bus from the root domain and then assign the PCIe bus to the root domain again. |
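A minimal sketch of the restore step above, using the dynamic PCIe bus assignment function from the control domain. The bus name PCIE1 and the root domain name root-dom are placeholders, not values from this document.
primary# ldm remove-io PCIE1 root-dom   (delete the PCIe bus from the root domain)
primary# ldm add-io PCIE1 root-dom      (assign the PCIe bus to the root domain again)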
Bug | 24754492 |
Model | SPARC M12-2, SPARC M12-2S, SPARC M10-4, SPARC M10-4S |
Description | A logical domain panic may occur, or the ldmd daemon may generate a core dump, when you execute the ldm shrink-socket command to remove memory from logical domains if both of the following conditions are met. - Oracle Solaris is running in the logical domains. - Multiple memory blocks on the same CPU socket are allocated to logical domains. You can check the memory allocation for each CPU socket with the ldm list-socket command. In the following example, two memory blocks on CPU socket #0 are allocated to logical domain domain-A. # ldm list-socket -o memory domain-A MEMORY PA SIZE SOCKET_ID BOUND 0x7e0450000000 16G 0 domain-A 0x7e08d0000000 1536M 0 domain-A |
Workaround | This has been modified with SRU 11.3.14.6.0. Execute the ldm remove-memory command in place of the ldm shrink-socket command to remove memory from logical domains. |
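For instance, assuming the domain-A layout shown in the description above, memory could be removed with ldm remove-memory instead of ldm shrink-socket; the size and domain name below are illustrative only.
primary# ldm remove-memory 1536M domain-A   (remove 1536 MB from domain-A)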
Bug | 24973255 |
Model | SPARC M12-2, SPARC M12-2S |
Description | Even when the following settings are implemented for Oracle Solaris kernel zones, warm/live migration of the Oracle Solaris kernel zone that is set with cpu-arch=sparc64-class1 from a SPARC M12 system to a SPARC M10 system may fail. /etc/system file ----------------------------------------------------------------- set enable_1ghz_stick = 1 set uhrt_enable=0x0 ----------------------------------------------------------------- For details on settings in the /etc/system file, see "8.2.3 Notes on Oracle Solaris Kernel Zones" in the Fujitsu SPARC M12 and Fujitsu M10/SPARC M10 System Operation and Administration Guide. This bug occurs when the OS of the Oracle Solaris kernel zone is Oracle Solaris 11.3 SRU 11.3.15.4.0 or Oracle Solaris 11.3 SRU 11.3.16.3.0. |
Workaround | This has been modified with SRU 11.3.17.5.0. Apply SRU 11.3.17.5.0 to the Oracle Solaris kernel zone for which warm/live migration is performed. |
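As a general check, not specific to this bug, the SRU level running in an Oracle Solaris kernel zone can be confirmed by querying the version of the entire package from inside the zone.
# pkg info entire   (the package version string reflects the installed SRU)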
Bug | 21654442 |
Model | SPARC M12-1, SPARC M12-2, SPARC M12-2S, SPARC M10-1, SPARC M10-4, SPARC M10-4S |
Description | If the Virtual SCSI Host Bus Adapter function is used, the following warning message may be output for the root domain or an I/O domain. [Example] vsan: WARNING: vsan_walk_match_func: iport-dip(40010ef2fd8): dip(40010ef29b8): prop(class) not found |
Workaround | This has been modified with SRU 11.2.11.5.0. This message has no effect on the functioning of the system and can be safely ignored. |
Bug | 21779989 |
Model | SPARC M12-1, SPARC M12-2, SPARC M12-2S, SPARC M10-1, SPARC M10-4, SPARC M10-4S |
Description | If you execute the ldm list-hba command supported in Oracle VM Server for SPARC 3.3 in a system with the PCI expansion unit connected, the device aliases for SLOT4 and later slots of the PCI expansion unit are displayed incorrectly. [Example] # ldm list-hba -l -t -d primary NAME VSAN ---- ---- /SYS/MBU/SASHBA/HBA0/PORTf [/pci@8000/pci@4/pci@0/pci@0/scsi@0/iport@f] : /SYS/PCI1/SLOT2/HBA0/PORT0/0/f (SLOT4 is correct here.) [/pci@8100/pci@4/pci@0/pci@1/pci@0/pci@0/pci@0/pci@1/pci@0/pci@10/pci@0/pci@0/LSI,sas@0/iport@f] init-port w500605b0045c8a90 Transport Protocol SAS /pci@8100/pci@4/pci@0/pci@1/pci@0/pci@0/pci@0/pci@1/pci@0/pci@10/pci@0/pci@0/LSI,sas@0/iport@f/smp@w50080e52b93fd03f tport w50080e52b93fd03f c31t50000394281BBA1Ad0s0 tport w50000394281bba1a lun 0 /pci@8100/pci@4/pci@0/pci@1/pci@0/pci@0/pci@0/pci@1/pci@0/pci@10/pci@0/pci@0/LSI,sas@0/iport@f/enclosure@w50080e52b93fd03d,0 tport w50080e52b93fd03d lun 0 : |
Workaround | This has been modified with SRU 11.3.4.5.0. This problem may occur even after SRU 11.3.4.5.0 is applied. In such a case, follow the procedure in "10.12 Saving/Restoring Logical Domain Configuration Information in an XML File" in the Fujitsu SPARC M12 and Fujitsu M10/SPARC M10 System Operation and Administration Guide to save the logical domain configuration information. Then, restore it. [Alternative method] If you execute vHBA-related commands such as ldm add-vsan, specify the device path instead of the device alias. [Example] # ldm add-vsan /pci@8100/pci@4/pci@0/pci@1/pci@0/pci@0/pci@0/pci@1/pci@0/pci@10/pci@0/pci@0/LSI,sas@0/iport@f vsan0 ldom1 |
Bug | 20646928 |
Model | SPARC M12-2S, SPARC M10-4S |
Description | If you use the dynamic reconfiguration function of the physical partition to dynamically delete a system board on which the system volume is located in the following environment, the system board deletion processing may fail. [Problem occurrence conditions] 1) A ZFS mirror of the system volume has been created using multiple disks, and 2) the mirroring disk and the mirrored disk are located on different system boards, and 3) the system board to which the system volume is assigned has been dynamically deleted. [Example of output message] XSCF> deleteboard -c disconnect -m unbind=resource 00-0 PSB#00-0 will be unconfigured from PPAR immediately. Continue?[y|n] :y All domains are temporarily suspended, proceed?[y|n] :y Start unconfigure preparation of PSB. [1200sec] 0end Unconfigure preparation of PSB has completed. Start unconfiguring PSB from PPAR. [7200sec] 0....\ The removal of PCIE0 from the domain primary failed. Error message from svc:/ldoms/agents in domain primary: ERROR: devices or resources are busy. end PSB#00-0 could not be unconfigured from PPAR-ID 0 due to operating system or Logical Domains Manager error. |
Workaround | This has been modified with SRU 11.3.5.6.0. |
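To check whether the system volume is mirrored with ZFS across multiple disks (condition 1 above), the root pool can be inspected; rpool is the default root pool name and may differ on your system.
primary# zpool status rpool   (lists the disks that make up the pool and shows whether they form a mirror)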
Bug | 25976819 |
Model | SPARC M12-1, SPARC M12-2, SPARC M12-2S, SPARC M10-1, SPARC M10-4, SPARC M10-4S |
Description | In an environment with Oracle VM Server for SPARC 3.3 or later installed on the control domain, a logical domain may not recognize the replacement CPU after a faulty CPU was automatically replaced. |
Workaround | This has been modified with SRU 11.3.23.5.0. [How to restore] Reboot the logical domain where the problem occurred. |
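One way to reboot an affected guest domain is from the control domain, as sketched below; the domain name ldom1 is a placeholder. Rebooting from within the domain itself also works.
primary# ldm stop-domain ldom1    (stop the guest domain)
primary# ldm start-domain ldom1   (start it again)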
Bug | 26052693 |
Model | SPARC M12-2S |
Description | In an environment with Oracle VM Server for SPARC 3.4.0.3 (SRU 11.3.18.6.0) or later installed on the control domain, if you reboot a logical domain after dynamically reconfiguring the physical partition with the XSCF deleteboard(8) command, the logical domain may panic repeatedly. [Panic message example] panic[cpu0]/thread=xxxxxxxx: tilelet_assign_cb: assigning pfns [4a0000, 4a8000) to mgid 39, mnodeid 57: pachunk 12 already assigned to mgid 3b, mnodeid 59 |
Workaround | This has been modified with SRU 11.3.23.5.0. [How to restore] Take action as follows for each logical domain where a panic occurred. - If a panic occurs on the control domain 1. Stop the physical partition. 2. With the XSCF addboard(8) command, return the physical partition to the state before the reconfiguration. 3. Start the physical partition again. [Example] XSCF> poweroff -y -p 0 XSCF> addboard -y -c configure -m bind=none -p 0 01-0 XSCF> poweron -y -p 0 - If a panic occurs on a guest domain Unbind the resource, and bind the resource again. [Example] primary# ldm stop-domain guest primary# ldm unbind-domain guest primary# ldm bind-domain guest primary# ldm start-domain guest |
Bug | 26176351 |
Model | SPARC M12-1, SPARC M12-2, SPARC M12-2S |
Description | On a logical domain with SRU 11.3.23.5.0 or later installed, even when the power-saving operation of the physical partition is set to "elastic", the CPU frequency may not decrease. This is because Solaris Power Aware Dispatcher (PAD) does not work for the CPU that the logical domain uses. |
Workaround | This has been modified with SRU 11.3.23.5.0. Install SRU 11.3.23.5.0 or later on the control domain. [How to restore] After starting all of the logical domains on which SRU 11.3.23.5.0 or later is installed, set the power-saving operation to "disabled" with the XSCF setpparmode(8) command. Then, set the power-saving operation to "elastic" again. [Example] XSCF> setpparmode -p 0 -y -m powermgmt_policy=disabled XSCF> setpparmode -p 0 -y -m powermgmt_policy=elastic |
Bug | 27112303 |
Model | SPARC M12-1, SPARC M12-2, SPARC M12-2S, SPARC M10-1, SPARC M10-4, SPARC M10-4S |
Description | In an environment with Oracle VM Server for SPARC 3.1.0.1 or later installed on the control domain, the power-saving operation of the physical partition does not work as "elastic" if the following conditions are met: 1) the power-saving operation of the physical partition is set to "elastic"; and 2) the physical partition is operating; and 3) the physical partition has been dynamically reconfigured with the XSCF deleteboard(8) command or addboard(8) command. Or 3) the number of CPU Activations has been changed with the XSCF setcod(8) command. |
Workaround | There is no effective workaround. [How to restore] Set the power-saving operation to "disabled" with the XSCF setpparmode(8) command and then set the power-saving operation to "elastic" again. [Example] - SPARC M12 system XSCF> setpparmode -p 0 -y -m powermgmt_policy=disabled XSCF> setpparmode -p 0 -y -m powermgmt_policy=elastic - SPARC M10 system XSCF> setpparmode -p 0 -y -m elastic=off XSCF> setpparmode -p 0 -y -m elastic=on |
Bug | - |
Model | SPARC M12-1, SPARC M12-2, SPARC M12-2S, SPARC M10-1, SPARC M10-4, SPARC M10-4S |
Description | When a CPU fails, the fmd daemon may mistakenly put a different CPU into the faulted state. In that case, the actually faulty CPU is not placed in the faulted state. There is no way to check in advance whether this problem will occur. |
Workaround | Every time you boot the control domain, confirm that the ldmd service is online, and then restart the fmd service. 1) Confirm that the ldmd service is online. primary# svcs ldmd STATE STIME FMRI online 15:42:57 svc:/ldoms/ldmd:default 2) Restart the fmd service. primary# svcadm restart fmd 3) Confirm that the fmd service is online. primary# svcs fmd STATE STIME FMRI online 15:43:44 svc:/system/fmd:default |
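As a general check, not a fix for this problem, the standard Oracle Solaris commands below show which resources the fault manager currently reports as faulted and the state of each CPU.
primary# fmadm faulty   (list resources the fault manager considers faulted)
primary# psrinfo        (show the state of each CPU)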