Known Issues and Bugs - Oracle Solaris Cluster 4.1 Release Notes
Known Issues and Bugs
The following known issues and bugs affect the operation of the Oracle Solaris
Cluster and Oracle Solaris Cluster Geographic Edition 4.1 software, as of the time
of release. Bugs and issues are grouped into the categories shown in the sections that follow.
Contact your Oracle support representative to see whether a fix becomes available.
Administration
A clzc reboot Command Causes the solaris10 Brand Exclusive-IP Zone Cluster to Panic the Global Zone Nodes ()
Problem Summary: A reboot or halt of a solaris10 branded exclusive-IP zone cluster node can
cause the global zone nodes to panic. This occurs when the zone cluster
nodes use the base network as the primary (public) network interface and there
are VNICs on that base network interface that are configured for other zone
cluster nodes in that cluster.
Workaround: Create and use VNICs as primary network interfaces for exclusive-IP zone clusters.
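A minimal sketch of this workaround, assuming a physical link named net1, a zone cluster named zc1, and a global-cluster node named pnode1 (all placeholder names); the VNIC is created in the global zone of each node and then set as the zone-cluster node's primary network interface:
# dladm create-vnic -l net1 vnic1
# clzonecluster configure zc1
clzc:zc1> select node physical-host=pnode1
clzc:zc1:node> add net
clzc:zc1:node:net> set physical=vnic1
clzc:zc1:node:net> end
clzc:zc1:node> end
clzc:zc1> commit
clzc:zc1> exit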
The /usr/sbin/shutdown Command in a Zone of an Exclusive-IP Zone Cluster Can Result in a Halt of Other Running Zones of the Zone Cluster ()
Problem Summary: If you use the /usr/sbin/shutdown command in a zone of an exclusive-IP
zone cluster to halt or reboot the zone, any other zones of the
zone cluster that are alive and running can be halted by cluster software.
Workaround: Do not use the /usr/sbin/shutdown command inside a zone of an exclusive-IP
zone cluster to halt or reboot the zone. Instead, use the /usr/cluster/bin/clzonecluster command in
the global zone to halt or reboot a zone of an exclusive-IP
zone cluster. The /usr/cluster/bin/clzonecluster command is the correct way to halt or reboot a
zone of any type of zone cluster.
If you see this problem,
use the /usr/cluster/bin/clzonecluster command to boot any such zones that were halted
by cluster software.
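For example, to halt, reboot, or boot a single zone of the zone cluster from the global zone (nodename and zone-cluster are placeholders):
# /usr/cluster/bin/clzonecluster halt -n nodename zone-cluster
# /usr/cluster/bin/clzonecluster reboot -n nodename zone-cluster
# /usr/cluster/bin/clzonecluster boot -n nodename zone-cluster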
The svc_private_network:default SMF Service Goes Into Maintenance in a solaris10 Brand Exclusive-IP Zone Cluster ()
Problem Summary: When you perform system identification in a zone of a solaris10 brand
exclusive-IP zone cluster, the svc_private_network:default SMF service goes into maintenance in that zone.
On subsequent reboots of the zone, the problem does not occur.
Workaround: After you perform system identification configuration in a zone of a solaris10
brand exclusive-IP zone cluster, reboot that zone.
Cannot Set the Jumbo Frame MTU Size for the clprivnet Interface ()
Problem Summary: The MTU of the cluster clprivnet interface is always set to the
default value of 1500 and does not match the MTU of the underlying
private interconnects. Therefore, you cannot set the jumbo frame MTU size for the
clprivnet interface.
Workaround: There is no known workaround.
Public Net Failure Does Not Fail Over DB Server Resource with SCAN Listener ()
Problem Summary: The HA-Oracle database resource does not fail over when the public network
fails if the HA-Oracle database is configured to use the Grid Infrastructure SCAN listener.
Workaround: When using the Oracle Grid Infrastructure SCAN listener with an HA-Oracle database,
add a logical host with an IP address that is on the same
subnet as the SCAN listener to the HA-Oracle database resource group.
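A minimal sketch of adding such a logical host, assuming a resource group named ora-rg, a hostname ora-lh that resolves to an address on the SCAN listener's subnet, and a resource named ora-lh-rs (all placeholder names):
# clreslogicalhostname create -g ora-rg -h ora-lh ora-lh-rs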
The Data Service Configuration Wizards Do Not Support Storage Resources and Resource Groups for Scalable HAStoragePlus (7202824)
Problem Summary: The existing data service configuration wizards do not support configuring scalable HAStoragePlus
resources and resource groups. In addition, the wizards are also not able to
detect existing resources and resource groups for scalable HAStoragePlus.
For example, while configuring HA for WebLogic Server in multi-instance mode, the wizard
displays the message No highly available storage resources are available for selection, even when
scalable HAStoragePlus resources and resource groups already exist on the cluster.
Workaround: Configure data services that use scalable HAStoragePlus resources and resource groups in
the following way:
1. Use the clresourcegroup and clresource commands to configure HAStoragePlus resource groups and resources in scalable mode, as sketched in the example after these steps.
2. Use the clsetup wizard to configure the data service as if it were on local file systems, meaning as if no storage resources are involved.
3. Use the CLI to create an offline-restart dependency on the scalable HAStoragePlus resources that you configured in Step 1, and a strong positive affinity on the scalable HAStoragePlus resource groups.
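A sketch of Steps 1 and 3, assuming a scalable storage resource group named hasp-rg, an HAStoragePlus resource named hasp-rs that manages /global/appdata, and an application resource group app-rg with resource app-rs (all placeholder names):
# clresourcegroup create -S hasp-rg
# clresource create -t SUNW.HAStoragePlus -g hasp-rg -p FileSystemMountPoints=/global/appdata hasp-rs
# clresourcegroup set -p RG_affinities=++hasp-rg app-rg
# clresource set -p Resource_dependencies_offline_restart=hasp-rs app-rs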
Removing a Node From an Exclusive-IP Zone Cluster Panics Cluster Nodes (7199744)
Problem Summary: When a zone-cluster node is removed from an exclusive-IP zone cluster, the
global-cluster nodes that host the exclusive-IP zone cluster panic. The issue is seen
only on a global cluster with InfiniBand interconnects.
Workaround: Halt the exclusive-IP zone cluster before you remove the zone-cluster node.
Nonexisting privnet Stops Zone Clusters From Booting Despite Good privnet (7199431)
Problem Summary: If invalid or nonexisting network links are specified as privnet resources in
an exclusive-IP zone cluster configuration (ip-type=exclusive), the zone-cluster node fails to join the zone
cluster despite presence of valid privnet resources.
Workaround: Remove the invalid privnet resource from the zone cluster configuration, then reboot
the zone-cluster node.
# clzonecluster reboot -n nodename zone-cluster
Alternatively, create the missing network link that corresponds to the invalid privnet resource,
then reboot the zone. See the appropriate man page for more information.
The clzonecluster Command Fails to Verify That defrouter Cannot Be Specified Without allowed-addr, CCR Has Failed Configuration (7199135)
Problem Summary: In an exclusive-IP zone cluster, if you configure a net resource
in the node scope with the defrouter property specified and the allowed-address property
unspecified, the Oracle Solaris software errors out. The Oracle Solaris software requires that, for an
exclusive-IP zone cluster, you always specify the allowed-address property if you specify
the defrouter property. If you do not, the Oracle Solaris software reports the
proper error message, but the cluster will have already populated the CCR with the
zone-cluster information. This action leaves the zone cluster in the Unknown state.
Workaround: Specify the allowed-address property for the zone cluster.
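A minimal sketch of setting both properties in the node scope of an exclusive-IP zone cluster, with placeholder node, link, address, and router values:
# clzonecluster configure zone-cluster
clzc:zone-cluster> select node physical-host=pnode1
clzc:zone-cluster:node> add net
clzc:zone-cluster:node:net> set physical=net1
clzc:zone-cluster:node:net> set allowed-address=192.168.10.11/24
clzc:zone-cluster:node:net> set defrouter=192.168.10.1
clzc:zone-cluster:node:net> end
clzc:zone-cluster:node> end
clzc:zone-cluster> commit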
clzonecluster boot, reboot, and halt Subcommands Fail if Any One of the Cluster Nodes Is Not in the Cluster (7193998)
Problem Summary: The clzonecluster boot, reboot, and halt subcommands fail if even one of the
cluster nodes is not in the cluster. An error similar to the following
is displayed:
root@pnode1:~# clzc reboot zoneclustername
(C827595) "pnode2" is not in cluster mode.
(C493113) No such object.
root@pnode1:~# clzc halt zoneclustername
(C827595) "pnode2" is not in cluster mode.
(C493113) No such object.
The clzonecluster boot, reboot, and halt subcommands should skip over nodes that are in noncluster
mode, rather than fail.
Workaround: Use the following option with the clzonecluster boot or clzonecluster halt commands to specify
the list of nodes for the subcommand:
-n nodename[,…]
The -n option allows running the subcommands on the specified subset of nodes.
For example, in a three-node cluster with the nodes pnode1, pnode2, and pnode3, if
the node pnode2 is down, you could run the following clzonecluster subcommands to exclude
the down node:
clzonecluster halt -n pnode1,pnode3 zoneclustername
clzonecluster boot -n pnode1,pnode3 zoneclustername
clzonecluster reboot -n pnode1,pnode3 zoneclustername
Cluster File System Does Not Support Extended Attributes (7167470)
Problem Summary: Extended attributes are not currently supported by cluster file systems. When a
user mounts a cluster file system with the xattr mount option, the following behavior is seen:
Extended attribute operations on a regular file fail with an ENOENT error.
Extended attribute operations on a directory are performed as normal operations on the directory itself.
So any program accessing the extended attributes of files in a cluster file
system might not get the expected results.
Workaround: Mount the cluster file system with the noxattr mount option.
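For example, a mount command using the noxattr option with the device and mount point that appear elsewhere in these notes (placeholder values):
# mount -F ufs -o global,logging,noxattr /dev/md/datadg/dsk/d0 /global/fs-data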
Using chmod to Set setuid Permission Returns Error in a Non–Global Zone on PxFS Secondary Server (7020380)
Problem Summary: The chmod command might fail to change setuid permissions on a file
in a cluster file system. If the chmod command is run on a
non-global zone and the non-global zone is not on the PxFS primary server,
the chmod command fails to change the setuid permission.
For example:
# chmod 4755 /global/oracle/test-file
chmod: WARNING: can't change /global/oracle/test-file
Workaround: Do one of the following:
Perform the operation on any global-cluster node that accesses the cluster file system.
Perform the operation on any non-global zone that runs on the PxFS primary node that has a loopback mount to the cluster file system.
Switch the PxFS primary to the global-cluster node where the non-global zone that encountered the error is running, as sketched below.
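A sketch of the third option, assuming the cluster file system is built on a device group named oracle-dg and the target node is phys-schost-2 (placeholder names); the PxFS primary follows the primary node of the underlying device group:
# cldevicegroup switch -n phys-schost-2 oracle-dg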
Cannot Create a Resource From a Configuration File With Non-Tunable Extension Properties (6971632)
Problem Summary: When you use an XML configuration file to create resources, if any
of the resources have extension properties that are not tunable, that is, the
Tunable resource property attribute is set to None, the command fails to create
the resource.
Workaround: Edit the XML configuration file to remove the non-tunable extension properties from
the resource.
Disabling Device Fencing While Cluster Is Under Load Results in Reservation Conflict (6908466)
Problem Summary: Turning off fencing for a shared device with an active I/O load
might result in a reservation conflict panic for one of the nodes that
is connected to the device.
Workaround: Quiesce I/O to a device before you turn off fencing for that device.
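For example, after quiescing I/O, fencing for a single shared device can be turned off as follows (d5 is a placeholder DID instance):
# cldevice set -p default_fencing=nofencing d5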
EMC SRDF Rejects Switchover When Replicated Device-Group Status Will Cause Switchover and Switchback to Fail (6798901)
Problem Summary: If the replica pair of an EMC SRDF device group is split and you attempt
to switch the device group over to another node, the switchover fails. Furthermore, the
device group is unable to come back online on the original node until
the replica pair has been returned to a paired state.
Workaround: Verify that SRDF replicas are not split before you attempt to switch
the associated Oracle Solaris Cluster global-device group to another cluster node.
Removing Nodes From the Cluster Configuration Can Result in Node Panics (6735924)
Problem Summary: Changing a cluster configuration from a three-node cluster to a two-node cluster
might result in complete loss of the cluster, if one of the remaining
nodes leaves the cluster or is removed from the cluster configuration.
Workaround: Immediately after removing a node from a three-node cluster configuration, run the
cldevice clear command on one of the remaining cluster nodes.
More Validation Checks Needed When Combining DIDs (6605101)
Problem Summary: The cldevice command is unable to verify that replicated SRDF devices that
are being combined into a single DID device are, in fact, replicas of
each other and belong to the specified replication group.
Workaround: Take care when combining DID devices for use with SRDF. Ensure that
the specified DID device instances are replicas of each other and that they
belong to the specified replication group.
Data Services
Active-Standby Configuration Not Supported for HA for TimesTen ()
Problem Summary: The TimesTen active-standby configuration requires an integration of Oracle Solaris Cluster methods in
the TimesTen ttCWadmin utility. This integration has not yet occurred, even though it
is described in the HA for TimesTen documentation. Therefore, do not use the TimesTen active-standby configuration with Oracle
Solaris Cluster HA for TimesTen and do not use the TimesTen ttCWadmin
utility on Oracle Solaris Cluster.
The Oracle Solaris Cluster HA for TimesTen data service comes with a set of resource
types. Most of these resource types are meant to be used with TimesTen
active-standby configurations. You must use only the ORCL.TimesTen_server resource type for your highly available
TimesTen configurations with Oracle Solaris Cluster.
Workaround: Do not use the TimesTen active-standby configuration.
Failure to Update Properties of SUNW.ScalMountPoint Resource Configured with NAS for Zone Cluster (7203506)
Problem Summary: The update of any property of a SUNW.ScalMountPoint resource that is
configured with a NAS file system for a zone cluster can fail with
an error message similar to the following:
clrs: hostname:zone-cluster : Bad address
Workaround: Use the clresource command to delete the resource, then re-create the resource
with all the required properties.
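A sketch of the re-creation, assuming a zone cluster named zc1, a resource group scal-mp-rg, a NAS file system exported as filer1:/export/data, and a mount point /global/nasdata (all placeholder names; the exact property set depends on your configuration):
# clresource delete -Z zc1 scal-mp-rs
# clresource create -Z zc1 -t SUNW.ScalMountPoint -g scal-mp-rg \
-p FileSystemType=nas -p TargetFileSystem=filer1:/export/data \
-p MountPointDir=/global/nasdata scal-mp-rs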
Global File System Configured in Zone Cluster's Scalable HAStoragePlus Resource Is Not Accessible (7197623)
Problem Summary: Consider a cluster file system with the following entry in the global
cluster's /etc/vfstab file, where the mount-at-boot value is no:
# cat /etc/vfstab
/dev/md/datadg/dsk/d0 /dev/md/datadg/rdsk/d0 /global/fs-data ufs 2 no logging,global
When an HAStoragePlus resource is created in a zone cluster's scalable resource group
and the above cluster file system has the mount-at-boot value set to no, the
cluster file system data might not be visible through the zone-cluster node mount point.
Workaround: Perform the following steps to avoid the problem:
From one global-cluster node, take offline the zone cluster's scalable resource group that contains HAStoragePlus.
# clresourcegroup offline -Z zonecluster scalable-resource-group
In the /etc/vfstab file on each global-cluster node, change the mount-at-boot value of the cluster file system entry to yes.
/dev/md/datadg/dsk/d0 /dev/md/datadg/rdsk/d0 /global/fs-data ufs 2 yes logging,global
From one global-cluster node, bring online the zone cluster's scalable resource group that contains HAStoragePlus.
# clresourcegroup online -Z zonecluster scalable-resource-group
RAC Wizard Failing With "ERROR: Oracle ASM is either not installed or the installation is invalid!" (7196184)
Problem Summary: The Oracle RAC configuration wizard fails with the message, ERROR: Oracle ASM is either not installed or the installation is invalid!.
Workaround: Ensure that the “ASM” entry is first within the /var/opt/oracle/oratab file, as
follows:
root@phys-schost-1:~# more /var/opt/oracle/oratab
+ASM1:/u01/app/11.2.0/grid:N
# line added by Agent
MOON:/oracle/ora_base/home:N
clsetup Wizard Fails While Configuring WebLogic Server Domain in the Zones/Zone Cluster With WebLogic Server Installed in the NFS (7196102)
Problem Summary: Configuration of the HA-WebLogic Server resource by using the clsetup wizard inside
a zone or zone cluster fails if the WebLogic Server is installed on an
NFS mount point.
This issue does not occur with NFS storage on the global cluster, or if
storage other than NFS is used.
Condition for this issue to occur: Mount the NFS storage with WebLogic
Server installed inside the zones and configure the WebLogic Server by using the clsetup wizard.
Error message: ERROR: The specified path is not a valid WebLogic Server domain location. A similar message is displayed for the Home Location, Start Script, and Environment file.
Finally, the wizard fails in Administration/Managed/RPS server discovery:
Not able to find the WebLogic Administration Server Instance.
Make sure the provided WebLogic Domain Location (<DOMAIN_LOCATION_PROVIDED>)
is the valid one.
No Reverse Proxy Server Instances found. You can't proceed further.
No Managed Server instances found. You can't proceed further.
Workaround: Configure the WebLogic Server resource manually.
With a Large Number of Non-Network-Aware GDS Resources, Some Fail to Restart and Remain Offline (7189659)
Problem Summary: This problem affects Generic Data Service (GDS) resources that meet all of
the following conditions:
No custom probe script is configured
The network_aware property is set to FALSE.
The Retry_count property is set to -1.
If such a resource fails to start, GDS continues to restart
it indefinitely. However, an issue exists in which the error Restart operation failed: cluster is reconfiguring is produced. This results
in the GDS resource not being automatically restarted.
Workaround: Manually disable and then re-enable the affected GDS resources.
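For example, for an affected GDS resource named gds-rs (a placeholder name):
# clresource disable gds-rs
# clresource enable gds-rs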
SUNW.Proxy_SMF_failover sc_delegated_restarter File Descriptor Leak (7189211)
Problem Summary: Every time the SMF proxy resource SUNW.Proxy_SMF_failover is disabled or enabled,
the file descriptor count increases by one. Repeated switches can grow the file
descriptor count to 256, at which point the limit is reached and the resource can
no longer be brought online.
Workaround: Disable and re-enable the sc_restarter SMF service.
# svcadm disable sc_restarter
# svcadm enable sc_restarter
When set Debug_level=1, pas-rg Fails Over to Node 2 And Cannot Start on Node 1 Anymore (7184102)
Problem Summary: If you set the Debug_level property to 1, a start of a
dialogue instance resource is impossible on any node.
Workaround: Use Debug_level=2, which is a superset of Debug_level=1.
Scalable Applications Are Not Isolated Between Zone Clusters (6911363)
Problem Summary: If scalable applications configured to run in different zone clusters bind to
INADDR_ANY and use the same port, then scalable services cannot distinguish between the
instances of these applications that run in different zone clusters.
Workaround: Do not configure the scalable applications to bind to INADDR_ANY as the
local IP address, or bind them to a port that does not conflict
with another scalable application.
Running clnas add or clnas remove Command on Multiple Nodes at the Same Time Could Cause Problem (6791618)
Problem Summary: When adding or removing a NAS device, running the clnas add or clnas remove command
on multiple nodes at the same time might corrupt the NAS configuration file.
Workaround: Run the clnas add or clnas remove command on one node at a time.
Developer Environment
clresource show -p Command Returns Wrong Information (7200960)
Problem Summary: In a solaris10 brand non-global zone, the clresource show -p property command returns the wrong information.
Workaround: This bug is caused by pre-Oracle Solaris Cluster 4.1 binaries in the
solaris10 brand zone. Run the following command from the global zone to get
the correct information about local non-global zone resources:
# clresource show -p property -Z zone-name
Geographic Edition
Cluster Node Does Not Have Access to Sun ZFS Storage Appliance Projects or iSCSI LUNs ()
Problem Summary: If a node leaves the cluster when the site is the
primary, the projects or iSCSI LUNs are fenced off. However, after a switchover or
takeover when the node joins the new secondary, the projects or iSCSI LUNs
are not unfenced and the applications on this node are not able to
access the file system after it is promoted to the primary.
Workaround: Reboot the node.
DR State Stays Reporting unknown on One Partner (7189050)
Problem Summary: The DR state stays reporting unknown, although DR resources are correctly reporting the replication state.
Workaround: Run the geopg validate protection-group command to force a resource-group state notification to the
protection group.
Takeover to the Secondary Is Failing Because fs umount Failed On the Primary (7182720)
Problem Summary: Takeover of a protection group fails if umount of the file system
fails on the primary site.
Workaround: Perform the following steps (a command sketch follows these steps):
Issue fuser -cu file-system.
Check for non-application process IDs, like cd, on the primary site.
Terminate such processes before you perform a takeover operation.
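A sketch of these steps, assuming the protected file system is mounted at /global/appdata and that the reported PID belongs to a shell sitting in the mount point rather than to the application (placeholder path, PID, and user):
# fuser -cu /global/appdata
/global/appdata: 12345c(oracle)
# kill 12345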
ZFS Storage Appliance Protection Group Creation And Validation Fail if Project Replication Is Stopped by Using the BUI (7176292)
Problem Summary: If you use the browser user interface (BUI) to stop replication, the
protection group goes to a configuration error state when protection-group validation fails.
Workaround: From the BUI, perform the following actions to stop replication:
Under the Shares tab, select the project being replicated.
Click on the Replication tab and select the Scheduled option.
Wait until the status changes to manual, then click the Enable/Disable button.
Multiple Notification Emails Sent From Global Cluster When Zone Clusters Are in Use (7098290)
Problem Summary: If Oracle Solaris Cluster Geographic Edition is configured in a zone
cluster, duplicate notification emails about loss of connection to partner clusters are sent from
both the zone cluster and the global cluster. The emails should only be
sent from the zone cluster.
Workaround: This is a side effect of the cluster event handling. It is
harmless, and the duplicates should be ignored.
Installation
Unable to Install Data Service Agents on Existing 3.3 5/11 solaris10 Brand Zone Without Specifying Patch Options (7197399)
Problem Summary: When installing agents in a solaris10 brand non-global zone from an Oracle
Solaris Cluster 3.3 or 3.3 5/11 DVD, the clzonecluster install-cluster command fails if you
do not specify the patches that support solaris10 branded zones.
Workaround: Perform the following steps to install agents from an Oracle Solaris Cluster
3.3 or 3.3 5/11 DVD to a solaris10 brand zone:
Reboot the zone cluster into offline mode.
# clzonecluster reboot -o zonecluster
Run the clzonecluster install-cluster command, specifying the information for the core patch that supports solaris10 branded zones.
# clzonecluster install-cluster -d dvd -p patchdir=patchdir[,patchlistfile=patchlistfile] \
[-n node[,…]] zonecluster
After installation is complete, reboot the zone cluster to bring it online.
# clzonecluster reboot zonecluster
clzonecluster Does Not Report Errors When install Is Used Instead of install-cluster for solaris10 Branded Zones (7190439)
Problem Summary: When the clzonecluster install command is used to install from an Oracle Solaris Cluster
release DVD, it does not print any error messages, but nothing is installed onto
the nodes.
Workaround: To install the Oracle Solaris Cluster release in a solaris10 branded
zone, do not use the clzonecluster install command, which is used to install the
Oracle Solaris 10 image. Instead, use the clzonecluster install-cluster command.
ASM Instance Proxy Resource Creation Errored When a Hostname Has Uppercase Letters (7190067)
Problem Summary: The use of uppercase letters in the cluster node hostname causes the
creation of ASM instance proxy resources to fail.
Workaround: Use only lowercase letters for the cluster-node hostnames when installing Oracle Solaris
Cluster software.
Wizard Won't Discover the ASM SID (7190064)
Problem Summary: When using the clsetup utility to configure the HA for Oracle or
HA for Oracle RAC database, the Oracle ASM System Identifier screen is not
able to discover or configure the Oracle ASM SID when a cluster node
hostname is configured with uppercase letters.
Workaround: Use only lowercase letters for the cluster-node hostnames when installing Oracle Solaris
Cluster software.
RAC Proxy Resource Creation Fails When the Cluster Node's Hostname Has Uppercase Letters (7189565)
Problem Summary: The use of uppercase letters in the cluster node hostname causes the
creation of RAC database proxy resources to fail.
Workaround: Use only lowercase letters for the cluster-node hostnames when you install Oracle
Solaris Cluster software.
Hard to Get Data Service Names for solaris10 Brand Zone Noninteractive Data Service Installation (7184714)
Problem Summary: It is difficult to know which agent names to
specify when using the clzonecluster install-cluster command to install agents with the -s option.
Workaround: When using the clzonecluster install-cluster -d dvd -s {all | software-component[,…]} options zone-cluster
command to create a solaris10 brand zone cluster, you can specify the following
cluster components with the -s option (a usage sketch follows the list):
ebs (SPARC only)
obiee (SPARC only)
pax (SPARC only)
PeopleSoft (SPARC only)
PostgreSQL
saa (SPARC only)
sag (SPARC only)
siebel (SPARC only)
xvm (SPARC only)
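For example, a usage sketch that installs the PostgreSQL component from a DVD image mounted at /mnt/dvd into a zone cluster named zc1 (placeholder path and name):
# clzonecluster install-cluster -d /mnt/dvd -s PostgreSQL zc1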
cacao Cannot Communicate on Machines Running Trusted Extensions (7183625)
Problem Summary: If the Trusted Extensions feature of Oracle Solaris software is enabled before
the Oracle Solaris Cluster software is installed and configured, the Oracle Solaris Cluster
setup procedures are unable to copy the common agent container security keys from
one node to other nodes of the cluster. Identical copies of the security
keys on all cluster nodes is a requirement for the container to function
properly on cluster nodes.
Workaround: Manually copy the security keys from one global-cluster node to all other
nodes of the global cluster.
On each node, stop the security file agent.
phys-schost# /usr/sbin/cacaoadm stop
On one node, change to the /etc/cacao/instances/default/ directory.
phys-schost-1# cd /etc/cacao/instances/default/
Create a tar file of the /etc/cacao/instances/default/ directory.
phys-schost-1# tar cf /tmp/SECURITY.tar security
Copy the /tmp/SECURITY.tar file to each of the other cluster nodes.
On each node to which you copied the /tmp/SECURITY.tar file, extract the security files.
Any security files that already exist in the /etc/cacao/instances/default/ directory are overwritten.
phys-schost-2# cd /etc/cacao/instances/default/
phys-schost-2# tar xf /tmp/SECURITY.tar
Delete the /tmp/SECURITY.tar file from each node in the cluster.
Note - You must delete each copy of the tar file to avoid security risks.
phys-schost-1# rm /tmp/SECURITY.tar
phys-schost-2# rm /tmp/SECURITY.tar
On each node, restart the security file agent.
phys-schost# /usr/sbin/cacaoadm start
The Command clnode remove -F nodename Fails to Remove the Node nodename From Solaris Volume Manager Device Groups (6471834)
Problem Summary: When a node is removed from the cluster by using the command
clnode remove -F nodename, a stale entry for the removed node might remain in Solaris Volume Manager
device groups.
Workaround: Remove the node from the Solaris Volume Manager device group by using
the metaset command before you run the clnode remove -F nodename command.
If you ran the clnode remove -F nodename command before you removed the node from the
Solaris Volume Manager device group, run the metaset command from an active cluster node
to remove the stale node entry from the Solaris Volume Manager device group.
Then run the clnode clear -F nodename command to completely remove all traces of the node
from the cluster.
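A sketch of this cleanup, assuming a Solaris Volume Manager disk set named datadg and a removed node named pnode2 (placeholder names); run the metaset command from an active cluster node:
# metaset -s datadg -d -f -h pnode2
# clnode clear -F pnode2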
Autodiscovery Should Find Only One Interconnect Path for Each Adapter (6299097)
Problem Summary: If there are redundant paths in the network hardware between interconnect adapters,
the scinstall utility might fail to configure the interconnect path between them.
Workaround: If autodiscovery discovers multiple interconnect paths, manually specify the adapter pairs for
each path.
Logical Hostname Failover Could Create Duplicate Addresses, Lead To Outage (7201091)
Problem Summary: For a shared-IP zone-cluster (ip-type=shared), if the underlying non-global zone of a
zone-cluster node is shut down by using the uadmin 1 0 or uadmin 2 0 command, the resulting
failover of LogicalHostname resources might result in duplicate IP addresses being configured on
a new primary node. The duplicate address is marked with the DUPLICATE flag for
five minutes, during which time the address is not usable by the
application. See the appropriate man page for more information about the DUPLICATE flag.
Workaround: Use either of the following methods:
Cleanly shut down the zone-cluster node from the global zone.
# /usr/cluster/bin/clzonecluster halt -n nodename zone-cluster
Before you perform any shutdown action from within the zone-cluster node, evacuate all resource groups from the zone-cluster node.
# /usr/cluster/bin/clresourcegroup evacuate -n zone-cluster-node +
sc_delegated_restarter Does Not Take Into Account Environment Variable Set in Manifest (7173159)
Problem Summary: Any environment variables that are specified in the service manifest are not
recognized when the service is put under SUNW.Proxy_SMF_failover resource type control.
Workaround: There is no workaround.
Unable to Re-enable Transport Interface After Disabling With ipadm disable-if -t interface (7141828)
Problem Summary: Cluster transport paths go offline with accidental use of the ipadm disable-if command
on the private transport interface.
Workaround: Disable and re-enable the cable that the disabled interface is connected to.
Determine the cable to which the interface is connected.
# /usr/cluster/bin/clinterconnect show | grep Cable
Disable the cable for this interface on this node.
# /usr/cluster/bin/clinterconnect disable cable
Re-enable the cable to bring the path online.
# /usr/cluster/bin/clinterconnect enable cable
Failure of Logical Hostname to Fail Over Caused by getnetmaskbyaddr() (7075347)
Problem Summary: Logical hostname failover requires getting the netmask from the network if nis is
enabled for the netmasks name service. This call to getnetmaskbyaddr() hangs for a while
due to CR 7051511, which might hang long enough for the Resource Group
Manager (RGM) to put the resource in the FAILED state. This occurs even
though the correct netmask entries are in the /etc/netmasks local file. This
issue affects only multi-homed clusters, such as cluster nodes that reside on multiple subnets.
Workaround: Configure the /etc/nsswitch.conf file, which is handled by an SMF service, to
only use files for netmasks lookups.
# /usr/sbin/svccfg -s svc:/system/name-service/switch setprop config/netmask = astring:\"files\"
# /usr/sbin/svcadm refresh svc:/system/name-service/switch
x86: scinstall -u update Sometimes Fails to Upgrade the Cluster Packages on an x86 Node (7201491)
Problem Summary: Running scinstall -u update on an x86 cluster node sometimes fails to upgrade the
cluster packages. The following error messages are reported:
root@phys-schost-1:~# scinstall -u update
Calling "scinstall -u preupgrade"
Renamed "/.alt.s11u1_24a-2/etc/cluster/ccr" to "/.alt.s11u1_24a-2/etc/cluster/ccr.upgrade".
Log file - /.alt.s11u1_24a-2/var/cluster/logs/install/scinstall.upgrade.log.12037
** Upgrading software **
Startup: Linked image publisher check ... Done
Startup: Refreshing catalog 'aie' ... Done
Startup: Refreshing catalog 'solaris' ... Done
Startup: Refreshing catalog 'ha-cluster' ... Done
Startup: Refreshing catalog 'firstboot' ... Done
Startup: Checking that pkg(5) is up to date ... Done
Planning: Solver setup ... Done
Planning: Running solver ... Done
Planning: Finding local manifests ... Done
Planning: Fetching manifests:
0% complete
Planning: Fetching manifests: 26/26
100% complete
Planning: Package planning ... Done
Planning: Merging actions ... Done
Planning: Checking for conflicting actions ... Done
Planning: Consolidating action changes ... Done
Planning: Evaluating mediators ... Done
Planning: Planning completed in 16.30 seconds
Packages to update: 26
Planning: Linked images: 0/1 1 working: zone:OtherNetZC
pkg: update failed (linked image exception(s)):
A 'update' operation failed for child 'zone:OtherNetZC' with an unexpected
return value of 1 and generated the following output:
pkg: 3/4 catalogs successfully updated:
Framework stall:
URL: 'http://bea100.:24936/versions/0/'
Workaround: Before you run the scinstall -u update command, run pkg refresh --full.
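For example, a minimal sketch of the order of operations:
# pkg refresh --full
# scinstall -u update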
