Compare commits
57 Commits
debian/2.0...master

f601bb7245
59522eccab
793d794ece
ed807a844a
fc9c175a69
dfd1ec6f73
482488129d
ae9f83f8ec
6108be93a8
6089a5c77b
d2f9a836fe
b036979fb1
20edac9498
7a4e78c789
1715c15630
34729fb0cd
7d6b6a0bf6
f7739dfabb
af2ce196be
a1d10a84b1
a3538939fe
41e2e2cd3f
7b5ac349f1
619a7aeed5
c702884d9d
8cb1593144
7ffe302bd9
d50441c28b
d46fb9bd4d
e535bfc085
5c8b81e483
5297531347
bada66e4ba
86edc34ffe
5d640cda61
312d3b442a
591c42bdea
05d15a653d
3da0f823de
032c5d9b01
485b02a84a
1c03890432
f3bb733880
23e841ec13
76efdffd60
91082a0069
d94ad70c66
fbbfd982e0
b6103de05e
8a95affcaf
2d2294797d
db7264ce1d
0ff5bde24d
5ffb793adc
cbf018227a
cdf1fbbde1
8a1888b715
BIN  amd64/2.04/arcconf  (Executable file; binary not shown)
BIN  amd64/2.05/arcconf  (Executable file; binary not shown)
BIN  amd64/2.06/arcconf  (Executable file; binary not shown)
BIN  amd64/3.00/arcconf  (Executable file; binary not shown)
BIN  amd64/3.01/arcconf  (Executable file; binary not shown)
BIN  amd64/3.02/arcconf  (Executable file; binary not shown)
BIN  amd64/3.03/arcconf  (Executable file; binary not shown)
BIN  amd64/3.07/arcconf  (Normal file; binary not shown)
BIN  amd64/4.23/arcconf  (Normal file; binary not shown)
BIN  amd64/4.26/arcconf  (Normal file; binary not shown)
BIN  amd64/4.30/arcconf  (Executable file; binary not shown)
678  amd64/7.31/README.TXT  (Normal file)
@@ -0,0 +1,678 @@
--------------------------------------------------------------------
README.TXT

Adaptec Storage Manager (ASM)

as of May 7, 2012
--------------------------------------------------------------------
Please review this file for important information about issues
and errata that were discovered after completion of the standard
product documentation. In the case of conflict between various
parts of the documentation set, this file contains the most
current information.

The following information is available in this file:

1. Software Versions and Documentation
   1.1 Adaptec Storage Manager
   1.2 Documentation
2. Installation and Setup Notes
   2.1 Supported Operating Systems
   2.2 Minimum System Requirements
   2.3 General Setup Notes
   2.4 Linux Setup Notes
   2.5 Debian Linux Setup Notes
3. General Cautions and Notes
   3.1 General Cautions
   3.2 General Notes
4. Operating System-Specific Issues and Notes
   4.1 Windows - All
   4.2 Windows 64-Bit
   4.3 Linux
   4.4 Debian and Ubuntu
   4.5 FreeBSD
   4.6 Fedora and FreeBSD
   4.7 Linux and FreeBSD
   4.8 VMware
5. RAID Level-Specific Notes
   5.1 RAID 1 and RAID 5 Notes
   5.2 RAID 10 Notes
   5.3 RAID x0 Notes
   5.4 RAID Volume Notes
   5.5 JBOD Notes
   5.6 Hybrid RAID Notes
   5.7 RAID-Level Migration Notes
6. Power Management Issues and Notes
7. "Call Home" Issues and Notes
8. ARCCONF Issues and Notes
9. Other Issues and Notes

--------------------------------------------------------------------
1. Software Versions and Documentation

1.1. Adaptec Storage Manager Version 7.3.1, ARCCONF Version 7.3.1

1.2. Documentation on this DVD

PDF format*:

- Adaptec Storage Manager User's Guide
- Adaptec RAID Controller Command Line Utility User's Guide

*Requires Adobe Acrobat Reader 4.0 or later

HTML and text format:

- Adaptec Storage Manager Online Help
- Adaptec Storage Manager README.TXT file

--------------------------------------------------------------------
2. Installation and Setup Notes

- The Adaptec Storage Manager User's Guide contains complete installation
  instructions for the Adaptec Storage Manager software. The Adaptec
  RAID Controllers Command Line Utility User's Guide contains
  complete installation instructions for ARCCONF, Remote ARCCONF,
  and the Adaptec CIM Provider. The Adaptec RAID Controllers
  Installation and User's Guide contains complete installation
  instructions for Adaptec RAID controllers and drivers.

2.1 Supported Operating Systems

- Microsoft Windows*:

  o Windows Server 2008, 32-bit and 64-bit
  o Windows Server 2008 R2, 64-bit
  o Windows SBS 2011, 32-bit and 64-bit
  o Windows Storage Server 2008 R2, 32-bit and 64-bit
  o Windows Storage Server 2011, 32-bit and 64-bit
  o Windows 7, 32-bit and 64-bit

  *Out-of-box and current service pack

- Linux:

  o Red Hat Enterprise Linux 5.7, 6.1, IA-32 and x64
  o SuSE Linux Enterprise Server 10, 11, IA-32 and x64
  o Debian Linux 5.0.7, 6.0, IA-32 and x64
  o Ubuntu Linux 10.10, 11.10, IA-32 and x64
  o Fedora Linux 14, 15, 16, IA-32 and x64
  o CentOS 5.7, 6.2, IA-32 and x64
  o VMware ESXi 5.0, VMware ESX 4.1 Classic (Agent only)

- Solaris:

  o Solaris 10
  o Solaris 11 Express

- FreeBSD:

  o FreeBSD 7.4, 8.2

2.2 Minimum System Requirements

  o Pentium-compatible 1.2 GHz processor, or equivalent
  o 512 MB RAM
  o 135 MB hard disk drive space
  o Greater than 256-color video mode

2.3 General Setup Notes

- You can configure Adaptec Storage Manager settings on other
  servers exactly as they are configured on one server. To
  replicate the Adaptec Storage Manager Enterprise view tree
  and notification list, do the following:

  1. Install Adaptec Storage Manager on one server.

  2. Start Adaptec Storage Manager. Using the 'Add remote system'
     action, define the servers for your tree.

  3. Open the Notification Manager. Using the 'Add system'
     action, define the notification list.

  4. Exit Adaptec Storage Manager.

  5. Copy the following files onto a diskette from the directory
     where Adaptec Storage Manager is installed:

     RaidMSys.ser --> to replicate the tree
     RaidNLst.ser --> to replicate the notification list
     RaidSMTP.ser --> to replicate the SMTP e-mail notification list
     RaidJob.ser  --> to replicate the jobs in the Task Scheduler

  6. Install Adaptec Storage Manager on the other servers.

  7. Copy the files from the diskette into the directory where
     Adaptec Storage Manager is installed on the other servers.
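The four-file copy in steps 5-7 can be sketched as a small shell function. The .ser file names are from the list above; the source and destination paths in the usage line are placeholders, since the actual install directory and media mount point vary by system:

```shell
#!/bin/sh
# Sketch of steps 5-7: copy the four ASM state files from the
# install directory to removable media (and back on the target
# server). Directory paths are illustrative placeholders.
copy_asm_state() {
    src="$1"; dst="$2"
    for f in RaidMSys.ser RaidNLst.ser RaidSMTP.ser RaidJob.ser; do
        cp "$src/$f" "$dst/"
    done
}
# usage (paths assumed): copy_asm_state /usr/StorMan /media/floppy
```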
2.4 Linux Setup Notes

- Because the RPM for Red Hat Enterprise Linux 5 is unsigned, the
  installer reports that the package is "Unsigned, Malicious Software".
  Ignore the message and continue the installation.

- To run Adaptec Storage Manager under Red Hat Enterprise Linux for
  x64, the Standard installation with "Compatibility Arch Support"
  is required.

- To install Adaptec Storage Manager on Red Hat Enterprise Linux,
  you must install two packages from the Red Hat installation CD:

  o compat-libstdc++-7.3-2.96.122.i386.rpm
  o compat-libstdc++-devel-7.3-2.96.122.i386.rpm

  NOTE: The version string in the file name may be different
  from above. Be sure to check the version string on the
  Red Hat CD.

  For example, type:

  rpm --install /mnt/compat-libstdc++-7.3-2.96.122.i386.rpm

  where /mnt is the mount point of the CD-ROM drive.

- To install Adaptec Storage Manager on Red Hat Enterprise Linux 5,
  you must install one of these packages from the Red Hat
  installation CD:

  o libXp-1.0.0-8.i386.rpm (32-bit)
  o libXp-1.0.0-8.x86.rpm (64-bit)

- To install Adaptec Storage Manager on SuSE Linux Enterprise
  Desktop 9, Service Pack 1, for 64-bit systems, you must install
  two packages from the SuSE Linux installation CD:

  - liblcms-devel-1.12-55.2.x86_64.rpm
  - compat-32bit-9-200502081830.x86_64.rpm

  NOTE: The version string in the file name may be different
  from above. Be sure to check the version string on the
  installation CD.

- To enable ASM's hard drive firmware update feature on RHEL 64-bit
  systems, you must ensure that the "sg" module is loaded in the
  kernel. To load the module manually (if it is not loaded already),
  use the command "modprobe sg".
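The "loaded already" check above can be scripted by scanning /proc/modules, the standard Linux list of loaded modules. The helper name is ours, not from the documentation:

```shell
#!/bin/sh
# Return success if the named kernel module appears in the modules
# list (default /proc/modules; a file argument eases testing).
module_loaded() {
    grep -q "^${1} " "${2:-/proc/modules}"
}
# as root: module_loaded sg || modprobe sg
```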
2.5 Debian Linux Setup Notes

- You can use the ASM GUI on Debian Linux 5.x only if you install
  the GNOME desktop. Due to a compatibility issue with X11, the
  default KDE desktop is not supported in this release.

- To ensure that the ASM Agent starts automatically when Debian
  is rebooted, you must update the default start and stop values
  in /etc/init.d/stor_agent, as follows:

  [Original]
  # Default-Start: 2 3 5
  # Default-Stop: 0 1 2 6

  [Modification]
  # Default-Start: 2 3 4 5
  # Default-Stop: 0 1 6

  To activate the changes, execute 'insserv stor_agent' as root.

--------------------------------------------------------------------
3. Adaptec Storage Manager General Cautions and Notes

3.1 General Cautions

- This release supports a maximum of 8 concurrent online capacity
  expansion (OCE) tasks in the RAID array migration wizard.

- While building or clearing a logical drive, do not remove and
  re-insert any drive from that logical drive. Doing so may cause
  unpredictable results.

- Do not move disks comprising a logical drive from one controller
  to another while the power is on. Doing so could cause the loss of
  the logical drive configuration or data, or both. Instead, power
  off both affected controllers, move the drives, and then restart.

- When using Adaptec Storage Manager and the CLI concurrently,
  configuration changes may not appear in the Adaptec Storage
  Manager GUI until you refresh the display (by pressing F5).

3.2 General Notes

- Adaptec Storage Manager requires the following range of ports
  to be open for remote access: 34570-34580 (TCP), 34570 (UDP),
  34577-34580 (UDP).

- Adaptec Storage Manager generates log files automatically to
  assist in tracking system activity. The log files are
  created in the directory where Adaptec Storage Manager is
  installed.

  o RaidEvt.log  - Contains the information reported in the
                   Adaptec Storage Manager event viewer for all
                   local and remote systems.

  o RaidEvtA.log - Contains the information reported in the
                   Adaptec Storage Manager event viewer for the
                   local system.

  o RaidNot.log  - Contains the information reported in the
                   Notification Manager event viewer.

  o RaidErr.log  - Contains Java messages generated by
                   Adaptec Storage Manager.

  o RaidErrA.log - Contains Java messages generated by the
                   Adaptec Storage Manager agent.

  o RaidCall.log - Contains the information reported when
                   statistics logging is enabled in ASM.

  Information written to these files is appended to the existing
  files to maintain a history. However, when an error log file
  reaches a size of 5 Mbytes, it is copied to a new file with
  the extension .1 and the original (that is, the .LOG file) is
  deleted and recreated. For other log files, a .1 file is created
  when the .LOG file reaches a size of 1 Mbyte. If a .1 file already
  exists, the existing .1 file is destroyed.
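The rotation policy just described can be sketched as a small shell function. The size limits (5 Mbytes for error logs, 1 Mbyte for others) are from the note; the exact ".1" file naming scheme is an assumption on our part:

```shell
#!/bin/sh
# Sketch of the described rotation: when a log reaches its size
# limit, move it to a .1 file (any existing .1 is destroyed) and
# recreate the log empty. The ".1" naming is assumed.
rotate_log() {
    log="$1"; limit="$2"   # limit in bytes
    size=$(wc -c < "$log")
    if [ "$size" -ge "$limit" ]; then
        mv -f "$log" "${log%.*}.1"   # clobbers an existing .1
        : > "$log"                   # recreate an empty log
    fi
}
```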
- In the Event viewer, Adaptec Storage Manager reports both the
  initial build task for a logical drive and a subsequent Verify/Fix
  as a "Build/Verify" task.

- When displaying information about a physical device, the device,
  vendor, and model information may be displayed incorrectly.

- After using a hot spare to successfully rebuild a redundant
  logical drive, Adaptec Storage Manager will continue to
  show the drive as a global hot spare. To remove the hot spare
  designation, delete it in Adaptec Storage Manager.

--------------------------------------------------------------------
4. Operating System-Specific Issues and Notes

4.1 Windows - All

- The Java Virtual Machine has a problem with the 256-color
  palette. (The Adaptec Storage Manager display may be distorted
  or hard to read.) Set the Display Properties settings to a
  color mode with more than 256 colors.

- When you shut down Windows, you might see the message
  "unexpected shutdown". Windows displays this message if the
  Adaptec Storage Manager Agent fails to exit within 3 seconds.
  It has no effect on file I/O or other system operations and can
  be ignored.

4.2 Windows 64-Bit

- Adaptec RAID controllers do not produce an audible alarm on the
  following 64-bit Windows operating systems:

  o Windows Server 2003 x64 Edition (all versions)

4.3 Linux

- When you delete a logical drive, the operating system can no longer
  see the last logical drive. WORKAROUND: To allow Linux to see the
  last logical drive, restart your system.

- The controller does not support attached CD drives during OS
  installation.

- On certain versions of Linux, you may see messages concerning font
  conversion errors. Font configuration under X-Windows is a known
  JVM problem. It does not affect the proper operation of the
  Adaptec Storage Manager software. To suppress these messages,
  add the following line to your .Xdefaults file:

  stringConversionWarnings: False

4.4 Debian and Ubuntu

- To create logical drives on Debian and Ubuntu installations, you
  must log in as root. It is not sufficient to start ASM with the
  'sudo /usr/StorMan/StorMan.sh' command (when not logged in as
  root). WORKAROUND: To create logical drives on Ubuntu when not
  logged in as root, install the package with the command
  'sudo dpkg -i storm_6.50-15645_amd64.deb'.

4.5 FreeBSD

- On FreeBSD systems, JBOD disks created with Adaptec Storage Manager
  are not immediately available to the OS. You must reboot the
  system before you can use the JBOD.

4.6 Fedora and FreeBSD

- Due to an issue with the Java JDialog Swing class, the 'Close'
  button may not appear on some Adaptec Storage Manager windows
  or dialog boxes under FreeBSD or Fedora Linux 15 or higher.
  WORKAROUND: Press ALT+F4 or right-click on the title bar, then
  close the dialog box from the pop-up menu.

4.7 Linux and FreeBSD

- If you cannot connect to a local or remote Adaptec Storage Manager
  installed on a Linux or FreeBSD system, verify that the TCP/IP hosts
  file is configured properly.

  1. Open the /etc/hosts file.

     NOTE: The following is an example:

     127.0.0.1 localhost.localdomain localhost matrix

  2. If the hostname of the system is identified on the line
     with 127.0.0.1, you must create a new host line.

  3. Remove the hostname from the 127.0.0.1 line.

     NOTE: The following is an example:

     127.0.0.1 localhost.localdomain localhost

  4. On a new line, type the IP address of the system.

  5. Using the Tab key, tab to the second column and enter the
     fully qualified hostname.

  6. Using the Tab key, tab to the third column and enter the
     nickname for the system.

     NOTE: The following is an example of a completed line:

     1.1.1.1 matrix.localdomain matrix

     where 1.1.1.1 is the IP address of the server and
     matrix is the hostname of the server.

  7. Restart the server for the changes to take effect.
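Steps 2-6 above can also be scripted. This is a minimal sketch assuming GNU sed and a plain hostname with no regex metacharacters; the values matrix and 1.1.1.1 are the example values from the note. Back up /etc/hosts before trying it on a real system:

```shell
#!/bin/sh
# Sketch of steps 2-6: strip the hostname from the end of the
# 127.0.0.1 line, then append a new line with the real IP.
# Assumes GNU sed (-i) and a hostname with no regex chars.
fix_hosts() {
    file="$1"; host="$2"; ip="$3"
    sed -i "s/[[:space:]]*${host}\$//" "$file"                          # step 3
    printf '%s\t%s.localdomain\t%s\n' "$ip" "$host" "$host" >> "$file"  # steps 4-6
}
# example (values from the note): fix_hosts /etc/hosts matrix 1.1.1.1
```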
4.8 VMware

- If you are unable to connect to VMware ESX Server from a
  remote ASM GUI, even though it appears in the Enterprise
  View as a remote system, most likely some required ports
  are open and others are not. (The VMware ESX firewall blocks
  most ports by default.) Check to make sure that ports
  34570 through 34581 are all open on the ESX server.

- After making array configuration changes in VMware, you must
  run the "esxcfg-rescan" tool manually at the VMware console
  to notify the operating system of the new target characteristics
  and/or availability. Alternatively, you can rescan from the
  Virtual Infrastructure Client: click on the host in the left
  panel, select the Configuration tab, choose "Storage Adapters",
  then, on the right side of the screen, click "Rescan".

- With VMware ESX 4.1, the OS command 'esxcfg-scsidevs -a'
  incorrectly identifies the Adaptec ASR-5445 controller as
  "Adaptec ASR5800". (ASM itself identifies the controller
  correctly.) To verify the controller name at the OS level,
  use this command to check the /proc file system:

  # cat /proc/scsi/aacraid/<Node #>

  where <Node #> is 1, 2, 3, etc.

--------------------------------------------------------------------
5. RAID Level-Specific Notes

5.1 RAID 1 and RAID 5 Notes

- During a logical device migration from RAID 1 or RAID 5 to
  RAID 0, if the original logical drive had a spare drive
  attached, the resulting RAID 0 retains the spare drive.
  Since RAID 0 is not redundant, you can remove the hot spare.

5.2 RAID 10 Notes

- If you force online a failed RAID 10, ASM erroneously shows two
  drives rebuilding (the two underlying member drives), not one.

- You cannot change the priority of a RAID 10 verify. Setting
  the priority at the start of a verify has no effect; the
  priority is still shown as high. Changing the priority of
  a running verify on a RAID 10 changes the displayed priority
  until a rescan is done, then the priority shows as high again.

- Performing a Verify or Verify/Fix on a RAID 10 displays the
  same message text in the event log: "Build/Verify started on
  second level logical drive of 'LogicalDrive_0'". You may see the
  message three times for a Verify, but only once for a Verify/Fix.

5.3 RAID x0 Notes

- To create a RAID x0 with an odd number of drives (15, 25, etc.),
  specify an odd number of second-level devices in the Advanced
  settings for the array. For a 25-drive RAID 50, for instance,
  the default is 24 drives.

  NOTE: This differs from the BIOS utility, which creates RAID x0
  arrays with an odd number of drives by default.

- After building or verifying a leg of a second-level logical drive,
  the status of the second-level logical drive is displayed as a
  "Quick Initialized" drive.

5.4 RAID Volume Notes

- In ASM, a failed RAID Volume comprised of two RAID 1 logical
  drives is erroneously reported as a failed RAID 10. A failed
  RAID Volume comprised of two RAID 5 logical drives is
  erroneously reported as a failed RAID 50.

5.5 JBOD Notes

- In this release, ASM deletes partitioned JBODs without issuing
  a warning message.

- When migrating a JBOD to a Simple Volume, the disk must be quiescent
  (no I/O load). Otherwise, the migration will fail with an I/O Read
  error.

5.6 Hybrid RAID Notes

- ASM supports Hybrid RAID 1 and RAID 10 logical drives comprised
  of hard disk drives and Solid State Drives (SSDs). For a Hybrid
  RAID 10, you must select an equal number of SSDs and HDDs in
  "every other drive" order, that is: SSD-HDD-SSD-HDD, and so on.
  Failure to select drives in this order creates a standard
  logical drive that does not take advantage of SSD performance.

5.7 RAID-Level Migration (RLM) Notes

- We strongly recommend that you use the default 256KB stripe
  size for all RAID-level migrations. Choosing a different stripe
  size may crash the system.

- If a disk error occurs when migrating a 2TB RAID 0 to RAID 5
  (e.g., bad blocks), ASM displays a message that the RAID 5 drive
  is reconfiguring even though the migration failed and no
  RAID-level migration task is running. To recreate the
  logical drive, fix or replace the bad disk, delete the RAID 5
  in ASM, then try again.

- When migrating a RAID 5EE, be careful not to remove and re-insert
  a drive in the array. If you do, the drive will not be included
  when the array is rebuilt. The migration will stop and the drive
  will be reported as Ready (not part of the array).

  NOTE: We strongly recommend that you do not remove and re-insert
  any drive during a RAID-level migration.

- When migrating a RAID 6 to a RAID 5, the migration will fail if
  the (physical) drive order on the target logical device differs
  from the source; for instance, migrating a four-drive RAID 6 to
  a three-drive RAID 5.

- Migrating a RAID 5 with greater than 2TB capacity to RAID 6 or
  RAID 10 is not supported in this release. Doing so may crash
  the system.

- When migrating from a RAID 0 to any redundant logical drive,
  like RAID 5 or 10, Adaptec Storage Manager shows the status as
  "Degraded Reconfiguring" for a moment, then the status changes
  to "Reconfiguring". The "Degraded" status does not appear in
  the event log.

- The following RAID-level migrations and online capacity
  expansions (OCE) are NOT supported:

  o RAID 50 to RAID 5 RLM
  o RAID 60 to RAID 6 RLM
  o RAID 50 to RAID 60 OCE

- During a RAID-level migration, ASM and the BIOS utility show
  different RAID levels while the migration is in progress. ASM shows
  the target RAID level; the BIOS utility shows the current RAID level.

- If a disk error occurs during a RAID-level migration (e.g., bad blocks),
  the exception is reported in the ASM event viewer (bottom pane)
  and in the support archive file (Support.zip, Controller 1 logs.txt),
  but not in the main ASM Event Log file, RaidEvtA.log.

- Always allow a RAID-level migration to complete before gathering
  support archive information in Support.zip. Otherwise, the Support.zip
  file will include incorrect partition information. Once the RLM is
  complete, the partition information will be reported correctly.

--------------------------------------------------------------------
6. Power Management Issues and Notes

- You must use a compatible combination of Adaptec Storage Manager
  and controller firmware and driver software to use the power
  management feature. All software components must support power
  management. You can download the latest controller firmware
  and drivers from the Adaptec Web site at www.adaptec.com.

- Power management is not supported under FreeBSD.

- Power management settings apply only to logical drives in the
  Optimal state. If you change the power settings on a Failed
  logical drive, then force the drive online, the previous
  settings are reinstated.

- After setting power values for a logical drive in ARCCONF, the
  settings are not updated in the Adaptec Storage Manager GUI.

--------------------------------------------------------------------
7. "Call Home" Issues and Notes

- The Call Home feature is not supported in this release. To gather
  statistics about your system for remote analysis, enable statistics
  logging in ASM, then create a Call Home Support Archive. For more
  information, see the user's guide.

--------------------------------------------------------------------
8. ARCCONF Issues and Notes

- With VMware ESX 4.1, you cannot delete a logical drive
  with ARCCONF. WORKAROUND: Connect to the VMware machine from a
  remote ASM GUI, then delete the logical drive.

- With Linux kernel versions 2.4 and 2.6, the ARCCONF
  DELETE <logical_drive> command may fail with a Kernel Oops
  error message. Even though the drives are removed from the
  Adaptec Storage Manager GUI, they may not really be deleted.
  Reboot the controller; then, issue the ARCCONF DELETE command
  again.

--------------------------------------------------------------------
9. Other Issues and Notes

- Some solid state drives identify themselves as ROTATING media.
  As a result, these SSDs:

  o Appear as SATA drives in the ASM Physical Devices View
  o Cannot be used as Adaptec maxCache devices
  o Cannot be used within a hybrid RAID array (comprised of
    SSDs and hard disks)

- The blink pattern on Adaptec Series 6/6Q/6E/6T controllers differs
  from Series 2 and Series 5 controllers:

  o When blinking drives in ASM, the red LED goes on and stays solid;
    on Series 2 and 5 controllers, the LED blinks on and off.

  o When failing drives in ASM (using the 'Set drive state to failed'
    action), the LED remains off; on Series 2 and 5 controllers, the
    LED goes on and remains solid.

- Cache settings for RAID Volumes (Read cache, Write cache, maxCache)
  have no effect. The cache settings for the underlying logical
  devices take priority.

- On rare occasions, ASM will report invalid medium error counts on
  a SATA hard drive or SSD. To correct the problem, use ARCCONF to
  clear the device counts. The command is:

  arcconf getlogs <Controller_ID> DEVICE clear

- On rare occasions, ASM lists direct-attached hard drives and SSDs
  as drives in a virtual SGPIO enclosure. Normally, the drives are
  listed in the Physical Devices View under ports CN0 and CN1.

- Hard Drive Firmware Update Wizard:

  o Firmware upgrade on Western Digital WD5002ABYS-01B1B0 hard drives
    is not supported for packet sizes below 2K (512/1024).

  o After flashing the firmware of a Seagate Barracuda ES ST3750640NS
    hard drive, you MUST cycle the power before ASM will show the new
    image. You can pull out and re-insert the drive; power cycle the
    enclosure; or power cycle the system if the drive is attached directly.

- Secure Erase:

  o If you reboot the system while a Secure Erase operation is in
    progress, the affected drive may not be displayed in Adaptec
    Storage Manager or other Adaptec utilities, such as the ACU.

  o You can perform a Secure Erase on a Solid State Drive (SSD) to
    remove the metadata. However, the drive will move to the Failed
    state when you reboot the system. To use the SSD, reboot to
    the BIOS, then initialize the SSD. After initialization, the SSD
    will return to the Ready state. (An SSD in the Failed state cannot
    be initialized in ASM.)

- The Repair option in the ASM Setup program may fail to fix a
  corrupted installation, depending on which files are affected.
  The repair operation completes successfully, but the software
  remains unfixed.

- Adaptec Storage Manager may fail to exit properly when you create
  64 logical devices in the wizard. The logical devices are still
  created, however.

- The "Clear logs on all controllers" action does not clear events
  in the ASM Event Viewer (GUI). It clears device events, defunct
  drive events, and controller events in the controllers' log files.
  To clear events in the lower pane of the GUI, select "Clear
  configuration event viewer" from the File menu.

- Stripe Size Limits for Large Logical Drives:

  The stripe size limit for logical drives with more than 8 hard
  drives is 512KB; for logical drives with more than 16 hard
  drives it is 256KB.
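The limits above amount to a simple lookup; a sketch follows. The helper name is ours, and the note states no limit for 8 or fewer drives, so that case is reported as "unlimited":

```shell
#!/bin/sh
# Stripe-size ceiling (in KB) per the note above, keyed on the
# number of hard drives in the logical drive.
max_stripe_kb() {
    drives="$1"
    if [ "$drives" -gt 16 ]; then
        echo 256
    elif [ "$drives" -gt 8 ]; then
        echo 512
    else
        echo unlimited   # no limit stated in the note for <= 8 drives
    fi
}
```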
- Agent Crashes when Hot-Plugging an Enclosure:

  With one or more logical drives on an enclosure, removing
  the enclosure cable from the controller side may crash
  the ASM Agent.

--------------------------------------------------------------------
(c) 2012 PMC-Sierra, Inc. All Rights Reserved.

This software is protected under international copyright laws and
treaties. It may only be used in accordance with the terms
of its accompanying license agreement.

The information in this document is proprietary and confidential to
PMC-Sierra, Inc., and for its customers' internal use. In any event,
no part of this document may be reproduced or redistributed in any
form without the express written consent of PMC-Sierra, Inc.,
1380 Bordeaux Drive, Sunnyvale, CA 94089.

P/N DOC-01700-02-A Rev A
BIN  amd64/7.31/arcconf  (Executable file; binary not shown)
BIN  amd64/7.31/libstdc++.so.5  (Executable file; binary not shown)
@ -14,7 +14,7 @@ function cleanup {
|
||||
}
|
||||
|
||||
# register the cleanup function to be called on the EXIT signal
|
||||
trap cleanup EXIT
|
||||
#trap cleanup EXIT
|
||||
|
||||
# Download Files specified in files.diz
|
||||
while IFS=! read type app version outputfile url md5 realver
|
||||
@@ -42,24 +42,36 @@ do
mkdir -p $WORK_DIR/${type}/${realver}
pushd $WORK_DIR/${type}/${realver}
unzip $DIR/../$outputfile
tar -zxf $DIR/../$outputfile
rpm2cpio $DIR/../$outputfile | cpio -idmv
rpm2cpio manager/*.x86_64.rpm | cpio -idmv
rpm2cpio manager/*.i386.rpm | cpio -idmv
popd
mkdir -p $WORK_DIR/${app}-${version}/${type}/${realver}
pushd $WORK_DIR/${app}-${version}/${type}/${realver}
case "${type}" in
"amd64" )
mv $WORK_DIR/${type}/${realver}/linux_x64/cmdline/arcconf .
mv $WORK_DIR/${type}/${realver}/linux_x64/arcconf .
mv $WORK_DIR/${type}/${realver}/usr/lib64/lib* .
mv $WORK_DIR/${type}/${realver}/usr/StorMan/arcconf .
mv $WORK_DIR/${type}/${realver}/usr/StorMan/libstdc* .
chmod +x arcconf
dos2unix $WORK_DIR/${type}/${realver}/linux_x64/cmdline/*.{txt,TXT}
mv $WORK_DIR/${type}/${realver}/linux_x64/cmdline/*.{txt,TXT} .
dos2unix $WORK_DIR/${type}/${realver}/usr/StorMan/*.{txt,TXT}
mv $WORK_DIR/${type}/${realver}/usr/StorMan/*.{txt,TXT} .
;;
"i386" )
mv $WORK_DIR/${type}/${realver}/linux/cmdline/arcconf .
mv $WORK_DIR/${type}/${realver}/usr/lib/lib* .
mv $WORK_DIR/${type}/${realver}/usr/StorMan/arcconf .
mv $WORK_DIR/${type}/${realver}/usr/StorMan/libstdc* .
chmod +x arcconf
dos2unix $WORK_DIR/${type}/${realver}/linux/cmdline/*.{txt,TXT}
mv $WORK_DIR/${type}/${realver}/linux/cmdline/*.{txt,TXT} .
dos2unix $WORK_DIR/${type}/${realver}/usr/StorMan/*.{txt,TXT}
mv $WORK_DIR/${type}/${realver}/usr/StorMan/*.{txt,TXT} .
;;
*)
echo "Wrong arch"
@@ -82,7 +94,7 @@ popd

VER=`cat $WORK_DIR/version.txt`
echo "Importing $DIR/../$FILENAME as $VER into git"
#exit 1
exit 1

cleanup
gbp import-orig --pristine-tar -u $VER $DIR/../$FILENAME
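The hunk above toggles the `trap cleanup EXIT` line, which is what guarantees the cleanup function runs no matter how the script exits. A minimal sketch of that pattern, using a throwaway temp file rather than the script's real working directory:

```shell
# Register a cleanup function so scratch files are removed even if
# the script fails partway through (the EXIT trap fires on any exit).
tmpfile=$(mktemp)

cleanup() {
    rm -f "$tmpfile"
}

# Run the pattern in a subshell: its EXIT trap fires when it ends.
(
    trap cleanup EXIT
    echo "downloading..." > "$tmpfile"
)

# After the subshell exits, the trap has already removed the file.
[ ! -f "$tmpfile" ] && result="cleaned" || result="leaked"
```

Commenting the trap out, as the new version of the script does, leaves the work directory in place for inspection after a run.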
4
debian/arcconf.wrapper
vendored
@@ -1,4 +0,0 @@
#!/bin/sh
LD_LIBRARY_PATH=/usr/lib/arcconf:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
exec /usr/lib/arcconf/arcconf $@
5
debian/arcconf.wrapper32
vendored
Normal file
@@ -0,0 +1,5 @@
#!/bin/sh
LD_LIBRARY_PATH=/usr/lib/arcconf:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
#exec /usr/bin/linux32 --uname-2.6 /usr/lib/arcconf/arcconf $@
exec /usr/lib/arcconf/arcconf $@
5
debian/arcconf.wrapper64
vendored
Normal file
@@ -0,0 +1,5 @@
#!/bin/sh
LD_LIBRARY_PATH=/usr/lib/arcconf:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
#exec /usr/bin/linux64 --uname-2.6 /usr/lib/arcconf/arcconf $@
exec /usr/lib/arcconf/arcconf $@
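Both wrappers follow the same pattern: prepend the package's private library directory to `LD_LIBRARY_PATH`, export it, then `exec` the real binary so arguments and exit status pass straight through. A self-contained sketch of that pattern; the `printenv` call stands in for the real `exec /usr/lib/arcconf/arcconf "$@"`:

```shell
# Build a wrapper the way debian/arcconf.wrapper64 does: prepend a
# private lib dir to LD_LIBRARY_PATH, export it, exec the target.
wrapper=$(mktemp)
cat > "$wrapper" <<'EOF'
#!/bin/sh
LD_LIBRARY_PATH=/usr/lib/arcconf:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
# Placeholder for: exec /usr/lib/arcconf/arcconf "$@"
exec printenv LD_LIBRARY_PATH
EOF
chmod +x "$wrapper"

# The exec'd program sees the private dir first on the search path.
libpath=$("$wrapper")
```

Because `exec` replaces the wrapper process rather than forking, the wrapped program keeps the caller's stdin/stdout and returns its own exit code directly.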
22
debian/changelog
vendored
@@ -1,9 +1,27 @@
arcconf (2.03.2-1.1) UNRELEASED; urgency=medium
arcconf (4.26.00.27449) UNRELEASED; urgency=medium

  [ Mario Fetka ]
  * new integrated version
  * add old 7.31 version of arcconf for compatibility
  * add old version 4.30 for compatibility
  * Bump to 2.06
  * Bump to 3.00
  * add missing 3.00 to package
  * Bump to 3.01
  * Bump to 3.02
  * Bump to 3.03

  [ root ]
  * Bump to 4.26.00.27449

 -- root <mario.fetka@disconnected-by-peer.at>  Tue, 12 Aug 2025 21:08:11 +0200

arcconf (2.05.1) UNRELEASED; urgency=medium

  * Non-maintainer upload.
  * new integrated version

 -- Mario Fetka <mario.fetka@gmail.com>  Tue, 25 Apr 2017 04:03:29 +0200
 -- Mario Fetka <mario.fetka@gmail.com>  Tue, 31 Oct 2017 17:14:13 +0100

arcconf (2.02.22404-1) unstable; urgency=low

2
debian/compat
vendored
@@ -1 +1 @@
5
7
2
debian/control
vendored
@@ -2,7 +2,7 @@ Source: arcconf
Section: admin
Priority: optional
Maintainer: Adam Cécile (Le_Vert) <gandalf@le-vert.net>
Build-Depends: debhelper (>= 5)
Build-Depends: debhelper (>= 5), libc6-i386 | atop
Standards-Version: 3.9.3

Package: arcconf
46
debian/control.in
vendored
Normal file
@@ -0,0 +1,46 @@
Source: arcconf
Section: admin
Priority: optional
Maintainer: Adam Cécile (Le_Vert) <gandalf@le-vert.net>
Build-Depends: debhelper (>= 5), @BUILD_DEPENDS@
Standards-Version: 3.9.3

Package: arcconf
Architecture: amd64 i386
Depends: ${misc:Depends}, ${shlibs:Depends}
Provides: arcconf-1.05, arcconf-1.07, arcconf-1.08, arcconf-2.02
Conflicts: arcconf-1.05, arcconf-1.07, arcconf-1.08, arcconf-2.02
Replaces: arcconf-1.05, arcconf-1.07, arcconf-1.08, arcconf-2.02
Description: Adaptec ARCCONF command line tool
 Compatible Products:
 .
 Adaptec RAID 6405E
 Adaptec RAID 6405T
 Adaptec RAID 6445
 Adaptec RAID 6805
 Adaptec RAID 6805E
 Adaptec RAID 6805Q
 Adaptec RAID 6805T
 Adaptec RAID 6805TQ
 Adaptec RAID 7805
 Adaptec RAID 7805Q
 Adaptec RAID 71605
 Adaptec RAID 71605E
 Adaptec RAID 71605Q
 Adaptec RAID 71685
 Adaptec RAID 72405
 Adaptec RAID 78165
 Adaptec RAID 8405
 Adaptec RAID 8405E
 Adaptec RAID 8805
 Adaptec RAID 8805E
 Adaptec RAID 8885
 Adaptec RAID 8885Q
 Adaptec RAID 81605Z
 Adaptec RAID 81605ZQ
 PMC Adaptec HBA 1000-8e
 PMC Adaptec HBA 1000-8i
 PMC Adaptec HBA 1000-8i8e
 PMC Adaptec HBA 1000-16e
 PMC Adaptec HBA 1000-16i
 Adaptec SAS Expander 82885T
36
debian/postinst
vendored
@@ -2,6 +2,36 @@

set -e

if [ -f "/usr/lib/arcconf/arcconf-4.26" ]; then
    update-alternatives --install /usr/lib/arcconf/arcconf arcconf /usr/lib/arcconf/arcconf-4.26 426
fi
if [ -f "/usr/lib/arcconf/arcconf-4.23" ]; then
    update-alternatives --install /usr/lib/arcconf/arcconf arcconf /usr/lib/arcconf/arcconf-4.23 423
fi
if [ -f "/usr/lib/arcconf/arcconf-3.07" ]; then
    update-alternatives --install /usr/lib/arcconf/arcconf arcconf /usr/lib/arcconf/arcconf-3.07 307
fi
if [ -f "/usr/lib/arcconf/arcconf-3.03" ]; then
    update-alternatives --install /usr/lib/arcconf/arcconf arcconf /usr/lib/arcconf/arcconf-3.03 303
fi
if [ -f "/usr/lib/arcconf/arcconf-3.02" ]; then
    update-alternatives --install /usr/lib/arcconf/arcconf arcconf /usr/lib/arcconf/arcconf-3.02 302
fi
if [ -f "/usr/lib/arcconf/arcconf-3.01" ]; then
    update-alternatives --install /usr/lib/arcconf/arcconf arcconf /usr/lib/arcconf/arcconf-3.01 301
fi
if [ -f "/usr/lib/arcconf/arcconf-3.00" ]; then
    update-alternatives --install /usr/lib/arcconf/arcconf arcconf /usr/lib/arcconf/arcconf-3.00 300
fi
if [ -f "/usr/lib/arcconf/arcconf-2.06" ]; then
    update-alternatives --install /usr/lib/arcconf/arcconf arcconf /usr/lib/arcconf/arcconf-2.06 206
fi
if [ -f "/usr/lib/arcconf/arcconf-2.05" ]; then
    update-alternatives --install /usr/lib/arcconf/arcconf arcconf /usr/lib/arcconf/arcconf-2.05 205
fi
if [ -f "/usr/lib/arcconf/arcconf-2.04" ]; then
    update-alternatives --install /usr/lib/arcconf/arcconf arcconf /usr/lib/arcconf/arcconf-2.04 204
fi
if [ -f "/usr/lib/arcconf/arcconf-2.03" ]; then
    update-alternatives --install /usr/lib/arcconf/arcconf arcconf /usr/lib/arcconf/arcconf-2.03 203
fi
@@ -29,6 +59,12 @@ fi
if [ -f "/usr/lib/arcconf/arcconf-1.04" ]; then
    update-alternatives --install /usr/lib/arcconf/arcconf arcconf /usr/lib/arcconf/arcconf-1.04 104
fi
if [ -f "/usr/lib/arcconf/arcconf-7.31" ]; then
    update-alternatives --install /usr/lib/arcconf/arcconf arcconf /usr/lib/arcconf/arcconf-7.31 100
fi
if [ -f "/usr/lib/arcconf/arcconf-4.30" ]; then
    update-alternatives --install /usr/lib/arcconf/arcconf arcconf /usr/lib/arcconf/arcconf-4.30 90
fi

#DEBHELPER#
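In automatic mode, update-alternatives points the shared name at whichever registered alternative has the highest priority, which is why postinst registers 4.26 with priority 426 while the legacy 7.31 and 4.30 builds get only 100 and 90. A pure-shell illustration of that selection rule (no root or dpkg state required; the paths mirror the ones registered above):

```shell
# Simulate update-alternatives auto mode: among registered
# (path, priority) pairs, the highest priority wins.
alternatives="\
/usr/lib/arcconf/arcconf-7.31 100
/usr/lib/arcconf/arcconf-4.30 90
/usr/lib/arcconf/arcconf-4.23 423
/usr/lib/arcconf/arcconf-4.26 426"

# Sort numerically on the priority column and keep the top entry.
best=$(printf '%s\n' "$alternatives" | sort -k2 -n | tail -n1 | cut -d' ' -f1)
```

Guarding each registration with `[ -f … ]` means the same maintainer script works for both architectures, since not every version ships on i386.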
36
debian/prerm
vendored
@@ -2,6 +2,36 @@

set -e

if [ -f "/usr/lib/arcconf/arcconf-4.26" ]; then
    update-alternatives --remove arcconf /usr/lib/arcconf/arcconf-4.26
fi
if [ -f "/usr/lib/arcconf/arcconf-4.23" ]; then
    update-alternatives --remove arcconf /usr/lib/arcconf/arcconf-4.23
fi
if [ -f "/usr/lib/arcconf/arcconf-3.07" ]; then
    update-alternatives --remove arcconf /usr/lib/arcconf/arcconf-3.07
fi
if [ -f "/usr/lib/arcconf/arcconf-3.03" ]; then
    update-alternatives --remove arcconf /usr/lib/arcconf/arcconf-3.03
fi
if [ -f "/usr/lib/arcconf/arcconf-3.02" ]; then
    update-alternatives --remove arcconf /usr/lib/arcconf/arcconf-3.02
fi
if [ -f "/usr/lib/arcconf/arcconf-3.01" ]; then
    update-alternatives --remove arcconf /usr/lib/arcconf/arcconf-3.01
fi
if [ -f "/usr/lib/arcconf/arcconf-3.00" ]; then
    update-alternatives --remove arcconf /usr/lib/arcconf/arcconf-3.00
fi
if [ -f "/usr/lib/arcconf/arcconf-2.06" ]; then
    update-alternatives --remove arcconf /usr/lib/arcconf/arcconf-2.06
fi
if [ -f "/usr/lib/arcconf/arcconf-2.05" ]; then
    update-alternatives --remove arcconf /usr/lib/arcconf/arcconf-2.05
fi
if [ -f "/usr/lib/arcconf/arcconf-2.04" ]; then
    update-alternatives --remove arcconf /usr/lib/arcconf/arcconf-2.04
fi
if [ -f "/usr/lib/arcconf/arcconf-2.03" ]; then
    update-alternatives --remove arcconf /usr/lib/arcconf/arcconf-2.03
fi
@@ -29,6 +59,12 @@ fi
if [ -f "/usr/lib/arcconf/arcconf-1.04" ]; then
    update-alternatives --remove arcconf /usr/lib/arcconf/arcconf-1.04
fi
if [ -f "/usr/lib/arcconf/arcconf-7.31" ]; then
    update-alternatives --remove arcconf /usr/lib/arcconf/arcconf-7.31
fi
if [ -f "/usr/lib/arcconf/arcconf-4.30" ]; then
    update-alternatives --remove arcconf /usr/lib/arcconf/arcconf-4.30
fi

#DEBHELPER#
76
debian/rules
vendored
@@ -3,6 +3,7 @@
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1


clean:
	dh_testdir
	dh_testroot
@@ -14,50 +15,111 @@ install:
	dh_clean -k
	dh_installdirs
ifeq ($(DEB_BUILD_ARCH),amd64)
	install -D -m 0755 debian/arcconf.wrapper \
	install -D -m 0755 debian/arcconf.wrapper64 \
		debian/arcconf/usr/sbin/arcconf
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" amd64/centos5/libstdc++.so.6.0.8
	install -D -m 0755 amd64/centos5/libstdc++.so.6.0.8 \
		debian/arcconf/usr/lib/arcconf/libstdc++.so.6.0.8
	ln -s libstdc++.so.6.0.8 debian/arcconf/usr/lib/arcconf/libstdc++.so.6
		debian/arcconf/usr/lib/arcconf/libstdcv4.so.6.0.8
	ln -s libstdcv4.so.6.0.8 debian/arcconf/usr/lib/arcconf/libstdcv4.so.6
	install -D -m 0755 amd64/4.26/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-4.26
	install -D -m 0755 amd64/4.23/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-4.23
	install -D -m 0755 amd64/3.07/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-3.07
	install -D -m 0755 amd64/3.03/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-3.03
	install -D -m 0755 amd64/3.02/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-3.02
	install -D -m 0755 amd64/3.01/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-3.01
	install -D -m 0755 amd64/3.00/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-3.00
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" amd64/2.06/arcconf
	install -D -m 0755 amd64/2.06/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-2.06
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" amd64/2.05/arcconf
	install -D -m 0755 amd64/2.05/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-2.05
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" amd64/2.04/arcconf
	install -D -m 0755 amd64/2.04/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-2.04
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" amd64/2.03/arcconf
	install -D -m 0755 amd64/2.03/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-2.03
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" amd64/2.02/arcconf
	install -D -m 0755 amd64/2.02/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-2.02
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" amd64/2.01/arcconf
	install -D -m 0755 amd64/2.01/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-2.01
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" amd64/2.00/arcconf
	install -D -m 0755 amd64/2.00/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-2.00
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" amd64/1.08/arcconf
	install -D -m 0755 amd64/1.08/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-1.08
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" amd64/1.07/arcconf
	install -D -m 0755 amd64/1.07/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-1.07
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" amd64/1.06/arcconf
	install -D -m 0755 amd64/1.06/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-1.06
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" amd64/1.05/arcconf
	install -D -m 0755 amd64/1.05/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-1.05
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" amd64/1.04/arcconf
	install -D -m 0755 amd64/1.04/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-1.04
	sed -i "s/libstdc++.so.5/libstdcv3.so.5/g" amd64/7.31/libstdc++.so.5
	install -D -m 0755 amd64/7.31/libstdc++.so.5 \
		debian/arcconf/usr/lib/arcconf/libstdcv3.so.5
	sed -i "s/libstdc++.so.5/libstdcv3.so.5/g" amd64/7.31/arcconf
	install -D -m 0755 amd64/7.31/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-7.31
	install -D -m 0755 i386/4.30/libstdc++-libc6.2-2.so.3 \
		debian/arcconf/usr/lib/arcconf/libstdc++-libc6.2-2.so.3
	install -D -m 0755 amd64/4.30/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-4.30
endif
ifeq ($(DEB_BUILD_ARCH),i386)
	install -D -m 0755 debian/arcconf.wrapper \
	install -D -m 0755 debian/arcconf.wrapper32 \
		debian/arcconf/usr/sbin/arcconf
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" i386/centos5/libstdc++.so.6.0.8
	install -D -m 0755 i386/centos5/libstdc++.so.6.0.8 \
		debian/arcconf/usr/lib/arcconf/libstdc++.so.6.0.8
	ln -s libstdc++.so.6.0.8 debian/arcconf/usr/lib/arcconf/libstdc++.so.6
		debian/arcconf/usr/lib/arcconf/libstdcv4.so.6.0.8
	ln -s libstdcv4.so.6.0.8 debian/arcconf/usr/lib/arcconf/libstdcv4.so.6
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" i386/2.01/arcconf
	install -D -m 0755 i386/2.01/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-2.01
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" i386/2.00/arcconf
	install -D -m 0755 i386/2.00/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-2.00
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" i386/1.08/arcconf
	install -D -m 0755 i386/1.08/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-1.08
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" i386/1.07/arcconf
	install -D -m 0755 i386/1.07/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-1.07
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" i386/1.06/arcconf
	install -D -m 0755 i386/1.06/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-1.06
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" i386/1.05/arcconf
	install -D -m 0755 i386/1.05/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-1.05
	sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" i386/1.04/arcconf
	install -D -m 0755 i386/1.04/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-1.04
	sed -i "s/libstdc++.so.5/libstdcv3.so.5/g" i386/7.31/libstdc++.so.5
	install -D -m 0755 i386/7.31/libstdc++.so.5 \
		debian/arcconf/usr/lib/arcconf/libstdcv3.so.5
	sed -i "s/libstdc++.so.5/libstdcv3.so.5/g" i386/7.31/arcconf
	install -D -m 0755 i386/7.31/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-7.31
	install -D -m 0755 i386/4.30/libstdc++-libc6.2-2.so.3 \
		debian/arcconf/usr/lib/arcconf/libstdc++-libc6.2-2.so.3
	install -D -m 0755 i386/4.30/arcconf \
		debian/arcconf/usr/lib/arcconf/arcconf-4.30
endif
	dh_install

@@ -73,7 +135,7 @@ binary-arch: build install
	dh_makeshlibs
	dh_strip
	dh_installdeb
	dh_shlibdeps
	LD_LIBRARY_PATH=$$(pwd)/debian/arcconf/usr/lib/arcconf:$${LD_LIBRARY_PATH} dh_shlibdeps
	dh_gencontrol
	dh_md5sums
	dh_builddeb
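The repeated `sed -i "s/libstdc++.so.6/libstdcv4.so.6/g"` calls in debian/rules rewrite the library name embedded in the prebuilt binaries so they resolve the package's renamed private copy instead of the system libstdc++. This only works because both strings are exactly 14 bytes, so file offsets inside the binary are preserved. A sketch on a throwaway file (a plain text stand-in for the real binary):

```shell
# Patch an embedded soname in place, as debian/rules does.
# Replacement must be the same length so the file size is unchanged.
f=$(mktemp)
printf 'HDR libstdc++.so.6 DATA' > "$f"
before=$(wc -c < "$f")

sed -i "s/libstdc++.so.6/libstdcv4.so.6/g" "$f"
after=$(wc -c < "$f")
```

If the replacement were a different length, sed would shift every byte after the match and corrupt an ELF binary; keeping the lengths equal is the entire trick.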
50
files.diz
@@ -1,18 +1,32 @@
amd64!arcconf!2.03.2!arcconf_v2_03_22476.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v2_03_22476.zip!e98c9f2fb11d368adc0378ddd9daad40!2.03
amd64!arcconf!2.03.2!arcconf_v2_02_22404.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v2_02_22404.zip!373126b8d256aa76022906145a87d398!2.02
amd64!arcconf!2.03.2!arcconf_v2_01_22270.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v2_01_22270.zip!a51110590c3439245e5179a9e35bad86!2.01
i386!arcconf!2.03.2!arcconf_v2_01_22270.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v2_01_22270.zip!a51110590c3439245e5179a9e35bad86!2.01
amd64!arcconf!2.03.2!arcconf_v2_00_21811.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v2_00_21811.zip!946ed3423254d893120ceb89f7779685!2.00
i386!arcconf!2.03.2!arcconf_v2_00_21811.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v2_00_21811.zip!946ed3423254d893120ceb89f7779685!2.00
amd64!arcconf!2.03.2!arcconf_v1_8_21375.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_8_21375.zip!7a697a7c8b99b66312116d4249ab1922!1.08
i386!arcconf!2.03.2!arcconf_v1_8_21375.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_8_21375.zip!7a697a7c8b99b66312116d4249ab1922!1.08
amd64!arcconf!2.03.2!arcconf_v1_7_21229.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_7_21229.zip!8d8e1829172bb72f69081b2ac6d2e50b!1.07
i386!arcconf!2.03.2!arcconf_v1_7_21229.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_7_21229.zip!8d8e1829172bb72f69081b2ac6d2e50b!1.07
amd64!arcconf!2.03.2!arcconf_v1_6_21062.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_6_21062.zip!32aa39da4ecca41c4cb987791f5aa656!1.06
i386!arcconf!2.03.2!arcconf_v1_6_21062.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_6_21062.zip!32aa39da4ecca41c4cb987791f5aa656!1.06
amd64!arcconf!2.03.2!arcconf_v1_5_20942.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_5_20942.zip!de7e676bdd9c04db8125d04086d9efd6!1.05
i386!arcconf!2.03.2!arcconf_v1_5_20942.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_5_20942.zip!de7e676bdd9c04db8125d04086d9efd6!1.05
amd64!arcconf!2.03.2!arcconf_v1_4_20859.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_4_20859.zip!6c3d72fe83ff76e68a70fa59d92ae5f7!1.04
i386!arcconf!2.03.2!arcconf_v1_4_20859.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_4_20859.zip!6c3d72fe83ff76e68a70fa59d92ae5f7!1.04
amd64!arcconf!2.03.2!libstdc++-4.1.2-55.el5.x86_64.rpm!http://vault.centos.org/5.11/os/x86_64/CentOS/libstdc++-4.1.2-55.el5.x86_64.rpm!ecbeb114ecf3a33848b7d4e1aefe65c2!centos5
i386!arcconf!2.03.2!libstdc++-4.1.2-55.el5.i386.rpm!http://vault.centos.org/5.11/os/i386/CentOS/libstdc++-4.1.2-55.el5.i386.rpm!b615266b1ddae7f9a17601179f83c826!centos5
amd64!arcconf!4.26.00.27449!arcconf_v4.26.00.27449.zip!https://storage.microsemi.com/raid/storage_manager/arcconf_B27449.zip!ae5d30627921fde07243aa7b466e4a63!4.26
amd64!arcconf!4.26.00.27449!arcconf_v4.23.00.27147.zip!https://storage.microsemi.com/raid/storage_manager/arcconf_B27147.zip!0d16f2e12f5949c2003425084d2787e3!4.23
amd64!arcconf!4.26.00.27449!arcconf_v3_07_23980.zip!https://storage.microsemi.com/raid/storage_manager/arcconf_v3_07_23980.zip!3fb455a930a619a82141ec890b199eab!3.07
amd64!arcconf!4.26.00.27449!arcconf_v3_03_23668.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v3_03_23668.zip!b561a9db7b68695a10b489e63348a7ad!3.03
amd64!arcconf!4.26.00.27449!arcconf_v3_02_23600.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v3_02_23600.zip!63878768e5c80b787f1e83f28cd07c09!3.02
amd64!arcconf!4.26.00.27449!arcconf_v3_01_23531.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v3_01_23531.zip!8d8148719bda151c8fbbdb3426ea6e08!3.01
amd64!arcconf!4.26.00.27449!arcconf_v3_00_23488.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v3_00_23488.zip!df63edf89beba7b8e716db04994e5242!3.00
amd64!arcconf!4.26.00.27449!arcconf_v2_06_23167.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v2_06_23167.zip!f84e041ec412b1ccea4ab5e659a7709e!2.06
amd64!arcconf!4.26.00.27449!arcconf_v2_05_22932.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v2_05_22932.zip!51a03b5fe08b45f229b2c73127c5c6ae!2.05
amd64!arcconf!4.26.00.27449!arcconf_v2_04_22665.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v2_04_22665.zip!07153757100a62ece5551d8950297f00!2.04
amd64!arcconf!4.26.00.27449!arcconf_v2_03_22476.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v2_03_22476.zip!e98c9f2fb11d368adc0378ddd9daad40!2.03
amd64!arcconf!4.26.00.27449!arcconf_v2_02_22404.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v2_02_22404.zip!373126b8d256aa76022906145a87d398!2.02
amd64!arcconf!4.26.00.27449!arcconf_v2_01_22270.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v2_01_22270.zip!a51110590c3439245e5179a9e35bad86!2.01
i386!arcconf!4.26.00.27449!arcconf_v2_01_22270.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v2_01_22270.zip!a51110590c3439245e5179a9e35bad86!2.01
amd64!arcconf!4.26.00.27449!arcconf_v2_00_21811.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v2_00_21811.zip!946ed3423254d893120ceb89f7779685!2.00
i386!arcconf!4.26.00.27449!arcconf_v2_00_21811.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v2_00_21811.zip!946ed3423254d893120ceb89f7779685!2.00
amd64!arcconf!4.26.00.27449!arcconf_v1_8_21375.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_8_21375.zip!7a697a7c8b99b66312116d4249ab1922!1.08
i386!arcconf!4.26.00.27449!arcconf_v1_8_21375.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_8_21375.zip!7a697a7c8b99b66312116d4249ab1922!1.08
amd64!arcconf!4.26.00.27449!arcconf_v1_7_21229.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_7_21229.zip!8d8e1829172bb72f69081b2ac6d2e50b!1.07
i386!arcconf!4.26.00.27449!arcconf_v1_7_21229.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_7_21229.zip!8d8e1829172bb72f69081b2ac6d2e50b!1.07
amd64!arcconf!4.26.00.27449!arcconf_v1_6_21062.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_6_21062.zip!32aa39da4ecca41c4cb987791f5aa656!1.06
i386!arcconf!4.26.00.27449!arcconf_v1_6_21062.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_6_21062.zip!32aa39da4ecca41c4cb987791f5aa656!1.06
amd64!arcconf!4.26.00.27449!arcconf_v1_5_20942.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_5_20942.zip!de7e676bdd9c04db8125d04086d9efd6!1.05
i386!arcconf!4.26.00.27449!arcconf_v1_5_20942.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_5_20942.zip!de7e676bdd9c04db8125d04086d9efd6!1.05
amd64!arcconf!4.26.00.27449!arcconf_v1_4_20859.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_4_20859.zip!6c3d72fe83ff76e68a70fa59d92ae5f7!1.04
i386!arcconf!4.26.00.27449!arcconf_v1_4_20859.zip!http://download.adaptec.com/raid/storage_manager/arcconf_v1_4_20859.zip!6c3d72fe83ff76e68a70fa59d92ae5f7!1.04
amd64!arcconf!4.26.00.27449!asm_linux_x64_v7_31_18856.tgz!http://download.adaptec.com/raid/storage_manager/asm_linux_x64_v7_31_18856.tgz!f9f13c1f9223da6138abc2c8bdadd54a!7.31
i386!arcconf!4.26.00.27449!asm_linux_x86_v7_31_18856.tgz!http://download.adaptec.com/raid/storage_manager/asm_linux_x86_v7_31_18856.tgz!2bdfd5e999a86ac5bd8c7b43d858fdfd!7.31
amd64!arcconf!4.26.00.27449!libstdc++-4.1.2-55.el5.x86_64.rpm!http://vault.centos.org/5.11/os/x86_64/CentOS/libstdc++-4.1.2-55.el5.x86_64.rpm!ecbeb114ecf3a33848b7d4e1aefe65c2!centos5
i386!arcconf!4.26.00.27449!libstdc++-4.1.2-55.el5.i386.rpm!http://vault.centos.org/5.11/os/i386/CentOS/libstdc++-4.1.2-55.el5.i386.rpm!b615266b1ddae7f9a17601179f83c826!centos5
amd64!arcconf!4.26.00.27449!asm_linux_x64_v4.30-16038.rpm!http://download.adaptec.com/raid/aac/sm/asm_linux_x64_v4.30-16038.rpm!b7c7b25e2c7006b087997d8f0b8182cc!4.30
i386!arcconf!4.26.00.27449!asm_linux_v4.30-16038.rpm!http://download.adaptec.com/raid/aac/sm/asm_linux_v4.30-16038.rpm!9585c8b60873b5f92be97a4355d306b7!4.30
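Each files.diz record packs seven `!`-separated fields (arch, app name, package version, output file, URL, md5, real version), which the download loop splits with `while IFS=! read type app version outputfile url md5 realver`. A minimal sketch of that split on one record taken verbatim from the list above:

```shell
# Split one files.diz record the way the import script's read loop does.
line='amd64!arcconf!4.26.00.27449!arcconf_v4.26.00.27449.zip!https://storage.microsemi.com/raid/storage_manager/arcconf_B27449.zip!ae5d30627921fde07243aa7b466e4a63!4.26'

# Setting IFS only for the read command confines the field split to it.
IFS='!' read -r type app version outputfile url md5 realver <<EOF
$line
EOF
```

`!` works as a delimiter here because none of the seven fields, URLs included, ever contain one; a field that might contain the separator would break this flat format.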
BIN
i386/4.30/arcconf
Executable file
Binary file not shown.
BIN
i386/4.30/libstdc++-libc6.2-2.so.3
Normal file
Binary file not shown.
678
i386/7.31/README.TXT
Normal file
@@ -0,0 +1,678 @@
--------------------------------------------------------------------
README.TXT

Adaptec Storage Manager (ASM)

as of May 7, 2012
--------------------------------------------------------------------
Please review this file for important information about issues
and errata that were discovered after completion of the standard
product documentation. In the case of conflict between various
parts of the documentation set, this file contains the most
current information.

The following information is available in this file:

1. Software Versions and Documentation
   1.1 Adaptec Storage Manager
   1.2 Documentation
2. Installation and Setup Notes
   2.1 Supported Operating Systems
   2.2 Minimum System Requirements
   2.3 General Setup Notes
   2.4 Linux Setup Notes
   2.5 Debian Linux Setup Notes
3. General Cautions and Notes
   3.1 General Cautions
   3.2 General Notes
4. Operating System-Specific Issues and Notes
   4.1 Windows - All
   4.2 Windows 64-Bit
   4.3 Linux
   4.4 Debian and Ubuntu
   4.5 FreeBSD
   4.6 Fedora and FreeBSD
   4.7 Linux and FreeBSD
   4.8 VMware
5. RAID Level-Specific Notes
   5.1 RAID 1 and RAID 5 Notes
   5.2 RAID 10 Notes
   5.3 RAID x0 Notes
   5.4 RAID Volume Notes
   5.5 JBOD Notes
   5.6 Hybrid RAID Notes
   5.7 RAID-Level Migration Notes
6. Power Management Issues and Notes
7. "Call Home" Issues and Notes
8. ARCCONF Issues and Notes
9. Other Issues and Notes
--------------------------------------------------------------------
1. Software Versions and Documentation

1.1. Adaptec Storage Manager Version 7.3.1, ARCCONF Version 7.3.1

1.2. Documentation on this DVD

   PDF format*:

   - Adaptec Storage Manager User's Guide
   - Adaptec RAID Controller Command Line Utility User's Guide

   *Requires Adobe Acrobat Reader 4.0 or later

   HTML and text format:

   - Adaptec Storage Manager Online Help
   - Adaptec Storage Manager README.TXT file

--------------------------------------------------------------------
2. Installation and Setup Notes

- The Adaptec Storage Manager User's Guide contains complete installation
  instructions for the Adaptec Storage Manager software. The Adaptec
  RAID Controllers Command Line Utility User's Guide contains
  complete installation instructions for ARCCONF, Remote ARCCONF,
  and the Adaptec CIM Provider. The Adaptec RAID Controllers
  Installation and User's Guide contains complete installation
  instructions for Adaptec RAID controllers and drivers.

2.1 Supported Operating Systems

- Microsoft Windows*:

  o Windows Server 2008, 32-bit and 64-bit
  o Windows Server 2008 R2, 64-bit
  o Windows SBS 2011, 32-bit and 64-bit
  o Windows Storage Server 2008 R2, 32-bit and 64-bit
  o Windows Storage Server 2011, 32-bit and 64-bit
  o Windows 7, 32-bit and 64-bit

  *Out-of-box and current service pack

- Linux:

  o Red Hat Enterprise Linux 5.7, 6.1, IA-32 and x64
  o SuSE Linux Enterprise Server 10, 11, IA-32 and x64
  o Debian Linux 5.0.7, 6.0, IA-32 and x64
  o Ubuntu Linux 10.10, 11.10, IA-32 and x64
  o Fedora Linux 14, 15, 16, IA-32 and x64
  o Cent OS 5.7, 6.2, IA-32 and x64
  o VMware ESXi 5.0, VMware ESX 4.1 Classic (Agent only)

- Solaris:

  o Solaris 10
  o Solaris 11 Express

- FreeBSD:

  o FreeBSD 7.4, 8.2
2.2 Minimum System Requirements

  o Pentium Compatible 1.2 GHz processor, or equivalent
  o 512 MB RAM
  o 135 MB hard disk drive space
  o Greater than 256 color video mode

2.3 General Setup Notes

- You can configure Adaptec Storage Manager settings on other
  servers exactly as they are configured on one server. To
  replicate the Adaptec Storage Manager Enterprise view tree
  and notification list, do the following:

  1. Install Adaptec Storage Manager on one server.

  2. Start Adaptec Storage Manager. Using the 'Add remote system'
     action, define the servers for your tree.

  3. Open the Notification Manager. Using the 'Add system'
     action, define the notification list.

  4. Exit Adaptec Storage Manager.

  5. Copy the following files onto a diskette from the directory
     where the Adaptec Storage Manager is installed:

     RaidMSys.ser --> to replicate the tree
     RaidNLst.ser --> to replicate the notification list
     RaidSMTP.ser --> to replicate the SMTP e-mail notification list
     RaidJob.ser  --> to replicate the jobs in the Task Scheduler

  6. Install Adaptec Storage Manager on the other servers.

  7. Copy the files from the diskette into the directory where
     Adaptec Storage Manager is installed on the other servers.
2.4 Linux Setup Notes
|
||||
|
||||
- Because the RPM for Red Hat Enterprise Linux 5 is unsigned, the
|
||||
installer reports that the package is "Unsigned, Malicious Software".
|
||||
Ignore the message and continue the installation.
|
||||
|
||||
- To run Adaptec Storage Manager under Red Hat Enterprise Linux for
|
||||
x64, the Standard installation with "Compatibility Arch Support"
|
||||
is required.
|
||||
|
||||
- To install Adaptec Storage Manager on Red Hat Enterprise Linux,
|
||||
you must install two packages from the Red Hat installation CD:
|
||||
|
||||
o compat-libstdc++-7.3-2.96.122.i386.rpm
|
||||
o compat-libstdc++--devel-7.3-2.96.122.i386.rpm
|
||||
|
||||
NOTE: The version string in the file name may be different
|
||||
from above. Be sure to check the version string on the
|
||||
Red Hat CD.
|
||||
|
||||
For example, type:
|
||||
|
||||
rpm --install /mnt/compat-libstdc++-7.3-2.96.122.i386.rpm
|
||||
|
||||
where mnt is the mount point of the CD-ROM drive.
|
||||
|
||||
- To install Adaptec Storage Manager on Red Hat Enterprise Linux 5,
|
||||
you must install one of these packages from the Red Hat
|
||||
installation CD:
|
||||
|
||||
o libXp-1.0.0-8.i386.rpm (32-Bit)
|
||||
o libXp-1.0.0-8.x86.rpm (64-Bit)
|
||||
|
||||
- To install Adaptec Storage Manager on SuSE Linux Enterprise
|
||||
Desktop 9, Service Pack 1, for 64-bit systems, you must install
|
||||
two packages from the SuSE Linux installation CD:
|
||||
|
||||
- liblcms-devel-1.12-55.2.x86_64.rpm
|
||||
- compat-32bit-9-200502081830.x86_64.rpm
|
||||
|
||||
NOTE: The version string in the file name may be different
|
||||
from above. Be sure to check the version string on the
|
||||
installation CD.
|
||||
|
||||
- To enable ASM's hard drive firmware update feature on RHEL 64-bit
|
||||
systems, you must ensure that the "sg" module is loaded in the
|
||||
kernel. To load the module manually (if it is not loaded already),
|
||||
use the command "modprobe sg".
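The check-then-load step can be scripted as below. The helper name is hypothetical, and the optional second argument exists only so the parsing logic can be exercised against a sample file rather than the live /proc/modules.

```shell
# module_loaded NAME [MODULES_FILE]: succeed if NAME appears in the
# kernel's loaded-module list (/proc/modules by default).
module_loaded() {
    grep -q "^$1 " "${2:-/proc/modules}"
}

# Usage (as root): load the SCSI generic driver only if it is missing.
#   module_loaded sg || modprobe sg
```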
2.5 Debian Linux Setup Notes

- You can use the ASM GUI on Debian Linux 5.x only if you install
  the GNOME desktop. Due to a compatibility issue with X11, the
  default KDE desktop is not supported in this release.

- To ensure that the ASM Agent starts automatically when Debian
  is rebooted, you must update the default start and stop values
  in /etc/init.d/stor_agent, as follows:

  [Original]
  # Default-Start: 2 3 5
  # Default-Stop: 0 1 2 6

  [Modification]
  # Default-Start: 2 3 4 5
  # Default-Stop: 0 1 6

  To activate the changes, execute 'insserv stor_agent' as root.
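The header edit above can be automated with sed. This is a sketch: the function name is hypothetical, and the exact spacing inside the rewritten headers is illustrative.

```shell
# fix_lsb_headers FILE: rewrite the LSB Default-Start/Default-Stop
# header lines to the values recommended above.
fix_lsb_headers() {
    sed -i \
        -e 's/^# Default-Start:.*/# Default-Start: 2 3 4 5/' \
        -e 's/^# Default-Stop:.*/# Default-Stop: 0 1 6/' \
        "$1"
}

# Usage (as root):
#   fix_lsb_headers /etc/init.d/stor_agent
#   insserv stor_agent
```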
--------------------------------------------------------------------
3. Adaptec Storage Manager General Cautions and Notes

3.1 General Cautions

- This release supports a maximum of 8 concurrent online capacity
  expansion (OCE) tasks in the RAID array migration wizard.

- While building or clearing a logical drive, do not remove and
  re-insert any drive from that logical drive. Doing so may cause
  unpredictable results.

- Do not move disks comprising a logical drive from one controller
  to another while the power is on. Doing so could cause the loss of
  the logical drive configuration or data, or both. Instead, power
  off both affected controllers, move the drives, and then restart.

- When using Adaptec Storage Manager and the CLI concurrently,
  configuration changes may not appear in the Adaptec Storage
  Manager GUI until you refresh the display (by pressing F5).

3.2 General Notes

- Adaptec Storage Manager requires the following ports to be open
  for remote access: 34570-34580 (TCP), 34570 (UDP),
  34577-34580 (UDP).
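On a Linux host with an iptables-based firewall, rules for these ports might look like the following. This is a sketch only, not part of the product documentation; adapt it to whatever firewall tooling your distribution actually uses.

```shell
# Allow the ASM remote-access ports listed above (inbound).
iptables -A INPUT -p tcp --dport 34570:34580 -j ACCEPT
iptables -A INPUT -p udp --dport 34570 -j ACCEPT
iptables -A INPUT -p udp --dport 34577:34580 -j ACCEPT
```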
- Adaptec Storage Manager generates log files automatically to
  assist in tracking system activity. The log files are
  created in the directory where Adaptec Storage Manager is
  installed.

  o RaidEvt.log  - Contains the information reported in the
                   Adaptec Storage Manager event viewer for all
                   local and remote systems.

  o RaidEvtA.log - Contains the information reported in the
                   Adaptec Storage Manager event viewer for the
                   local system.

  o RaidNot.log  - Contains the information reported in the
                   Notification Manager event viewer.

  o RaidErr.log  - Contains Java messages generated by
                   Adaptec Storage Manager.

  o RaidErrA.log - Contains Java messages generated by the
                   Adaptec Storage Manager agent.

  o RaidCall.log - Contains the information reported when
                   statistics logging is enabled in ASM.

  Information written to these files is appended to the existing
  files to maintain a history. However, when an error log file
  reaches a size of 5 MB, it is copied to a new file with the
  extension .1 and the original .log file is deleted and
  recreated. For other log files, a .1 file is created when the
  .log file reaches a size of 1 MB. If a .1 file already exists,
  it is overwritten.
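The rotation rule described above can be mimicked with a short shell sketch. The helper name is hypothetical, and it illustrates (not replaces) ASM's built-in behaviour; the size limit is passed in bytes, and the example path is an assumption about the install directory.

```shell
# rotate_log FILE LIMIT: if FILE is larger than LIMIT bytes, move it
# to FILE.1 (overwriting any existing .1 file) and start a fresh log.
rotate_log() {
    size=$(wc -c < "$1")
    if [ "$size" -gt "$2" ]; then
        mv -f "$1" "$1.1"   # any existing .1 file is overwritten
        : > "$1"            # recreate the empty log file
    fi
}

# Example: rotate RaidErr.log once it exceeds 5 MB.
#   rotate_log /usr/StorMan/RaidErr.log $((5 * 1024 * 1024))
```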
- In the Event viewer, Adaptec Storage Manager reports both the
  initial build task for a logical drive and a subsequent Verify/Fix
  as a "Build/Verify" task.

- When displaying information about a physical device, the device,
  vendor and model information may be displayed incorrectly.

- After using a hot spare to successfully rebuild a redundant
  logical drive, Adaptec Storage Manager will continue to
  show the drive as a global hot spare. To remove the hot spare
  designation, delete it in Adaptec Storage Manager.

--------------------------------------------------------------------
4. Operating System-Specific Issues and Notes

4.1 Windows - All

- The Java Virtual Machine has a problem with the 256-color
  palette. (The Adaptec Storage Manager display may be distorted
  or hard to read.) Set the Display Properties Settings to a
  color mode with greater than 256 colors.

- When you shut down Windows, you might see the message
  "unexpected shutdown". Windows displays this message if the
  Adaptec Storage Manager Agent fails to exit within 3 seconds.
  It has no effect on file I/O or other system operations and can
  be ignored.

4.2 Windows 64-Bit

- Adaptec RAID controllers do not produce an audible alarm on the
  following 64-bit Windows operating systems:

  o Windows Server 2003 x64 Edition (all versions)

4.3 Linux

- When you delete a logical drive, the operating system can no longer
  see the last logical drive. WORKAROUND: To allow Linux to see the
  last logical drive, restart your system.

- The controller does not support attached CD drives during OS
  installation.

- On certain versions of Linux, you may see messages concerning font
  conversion errors. Font configuration under X-Windows is a known
  JVM problem. It does not affect the proper operation of the
  Adaptec Storage Manager software. To suppress these messages,
  add the following line to your .Xdefaults file:

    stringConversionWarnings: False

4.4 Debian and Ubuntu

- To create logical drives on Debian and Ubuntu installations, you
  must log in as root. It is not sufficient to start ASM with the
  'sudo /usr/StorMan/StorMan.sh' command (when not logged in as
  root). WORKAROUND: To create logical drives on Ubuntu when not
  logged in as root, install the package with the command
  'sudo dpkg -i storm_6.50-15645_amd64.deb'.
4.5 FreeBSD

- On FreeBSD systems, JBOD disks created with Adaptec Storage Manager
  are not immediately available to the OS. You must reboot the
  system before you can use the JBOD.

4.6 Fedora and FreeBSD

- Due to an issue with the Java JDialog Swing class, the 'Close'
  button may not appear on some Adaptec Storage Manager windows
  or dialog boxes under FreeBSD or Fedora Linux 15 or higher.
  WORKAROUND: Press ALT+F4 or right-click on the title bar, then
  close the dialog box from the pop-up menu.

4.7 Linux and FreeBSD

- If you cannot connect to a local or remote Adaptec Storage Manager
  installed on a Linux or FreeBSD system, verify that the TCP/IP hosts
  file is configured properly.

  1. Open the /etc/hosts file.

     NOTE: The following is an example:

     127.0.0.1    localhost.localdomain localhost matrix

  2. If the hostname of the system is identified on the line
     with 127.0.0.1, you must create a new host line.

  3. Remove the hostname from the 127.0.0.1 line.

     NOTE: The following is an example:

     127.0.0.1    localhost.localdomain localhost

  4. On a new line, type the IP address of the system.

  5. Using the Tab key, tab to the second column and enter the
     fully qualified hostname.

  6. Using the Tab key, tab to the third column and enter the
     nickname for the system.

     NOTE: The following is an example of a completed line:

     1.1.1.1    matrix.localdomain matrix

     where 1.1.1.1 is the IP address of the server and
     matrix is the hostname of the server.

  7. Restart the server for the changes to take effect.
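The check in steps 2 and 3 can be scripted. The helper name is hypothetical, and the optional second argument exists only so the logic can be exercised against a sample file instead of the live /etc/hosts.

```shell
# hosts_ok HOSTNAME [HOSTS_FILE]: succeed if HOSTNAME does NOT appear
# on the 127.0.0.1 line of the hosts file (/etc/hosts by default).
hosts_ok() {
    ! grep '^127\.0\.0\.1' "${2:-/etc/hosts}" | grep -qw "$1"
}

# Usage:
#   hosts_ok "$(hostname)" || echo "move $(hostname) off the 127.0.0.1 line"
```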
4.8 VMware

- If you are unable to connect to VMware ESX Server from a
  remote ASM GUI, even though it appears in the Enterprise
  View as a remote system, most likely some required ports
  are open and others are not. (The VMware ESX firewall blocks
  most ports by default.) Check to make sure that ports
  34570 through 34581 are all open on the ESX server.

- After making array configuration changes in VMware, you must
  run the "esxcfg-rescan" tool manually at the VMware console
  to notify the operating system of the new target characteristics
  and/or availability. Alternatively, you can rescan from the
  Virtual Infrastructure Client: click on the host in the left
  panel, select the Configuration tab, choose "Storage Adapters",
  then, on the right side of the screen, click "Rescan".

- With VMware ESX 4.1, the OS command 'esxcfg-scsidevs -a'
  incorrectly identifies the Adaptec ASR-5445 controller as
  "Adaptec ASR5800". (ASM itself identifies the controller
  correctly.) To verify the controller name at the OS level,
  use this command to check the /proc file system:

    # cat /proc/scsi/aacraid/<Node #>

  where <Node #> is 1, 2, 3, etc.

--------------------------------------------------------------------
5. RAID Level-Specific Notes

5.1 RAID 1 and RAID 5 Notes

- During a logical device migration from RAID 1 or RAID 5 to
  RAID 0, if the original logical drive had a spare drive
  attached, the resulting RAID 0 retains the spare drive.
  Since RAID 0 is not redundant, you can remove the hot spare.

5.2 RAID 10 Notes

- If you force online a failed RAID 10, ASM erroneously shows two
  drives rebuilding (the two underlying member drives), not one.

- You cannot change the priority of a RAID 10 verify. Setting
  the priority at the start of a verify has no effect; the
  priority is still shown as high. Changing the priority of
  a running verify on a RAID 10 changes the displayed priority
  only until a rescan is done; then the priority shows as high again.

- Performing a Verify or Verify/Fix on a RAID 10 displays the
  same message text in the event log: "Build/Verify started on
  second level logical drive of 'LogicalDrive_0.'" You may see the
  message three times for a Verify, but only once for a Verify/Fix.

5.3 RAID x0 Notes

- To create a RAID x0 with an odd number of drives (15, 25, etc.),
  specify an odd number of second-level devices in the Advanced
  settings for the array. For a 25-drive RAID 50, for instance,
  the default is 24 drives.

  NOTE: This differs from the BIOS utility, which creates RAID x0
  arrays with an odd number of drives by default.

- After building or verifying a leg of a second-level logical drive,
  the status of the second-level logical drive is displayed as a
  "Quick Initialized" drive.

5.4 RAID Volume Notes

- In ASM, a failed RAID Volume comprised of two RAID 1 logical
  drives is erroneously reported as a failed RAID 10. A failed
  RAID Volume comprised of two RAID 5 logical drives is
  erroneously reported as a failed RAID 50.

5.5 JBOD Notes

- In this release, ASM deletes partitioned JBODs without issuing
  a warning message.

- When migrating a JBOD to a Simple Volume, the disk must be quiescent
  (no I/O load). Otherwise, the migration will fail with an I/O Read
  error.

5.6 Hybrid RAID Notes

- ASM supports Hybrid RAID 1 and RAID 10 logical drives comprised
  of hard disk drives and Solid State Drives (SSDs). For a Hybrid
  RAID 10, you must select an equal number of SSDs and HDDs in
  "every other drive" order, that is: SSD-HDD-SSD-HDD, and so on.
  Failure to select drives in this order creates a standard
  logical drive that does not take advantage of SSD performance.

5.7 RAID-Level Migration (RLM) Notes

- We strongly recommend that you use the default 256KB stripe
  size for all RAID-level migrations. Choosing a different stripe
  size may crash the system.

- If a disk error occurs when migrating a 2TB RAID 0 to RAID 5
  (e.g., bad blocks), ASM displays a message that the RAID 5 drive
  is reconfiguring even though the migration failed and no
  RAID-level migration task is running. To recreate the
  logical drive, fix or replace the bad disk, delete the RAID 5
  in ASM, then try again.

- When migrating a RAID 5EE, be careful not to remove and re-insert
  a drive in the array. If you do, the drive will not be included
  when the array is rebuilt. The migration will stop and the drive
  will be reported as Ready (not part of array).

  NOTE: We strongly recommend that you do not remove and re-insert
  any drive during a RAID-level migration.

- When migrating a RAID 6 to a RAID 5, the migration will fail if
  the (physical) drive order on the target logical device differs
  from the source; for instance, migrating a four-drive RAID 6 to
  a three-drive RAID 5.

- Migrating a RAID 5 with greater than 2TB capacity to RAID 6 or
  RAID 10 is not supported in this release. Doing so may crash
  the system.

- When migrating from a RAID 0 to any redundant logical drive,
  like RAID 5 or 10, Adaptec Storage Manager shows the status as
  "Degraded Reconfiguring" for a moment, then the status changes
  to "Reconfiguring". The "Degraded" status does not appear in
  the event log.

- The following RAID-level migrations and online capacity
  expansions (OCE) are NOT supported:

  o RAID 50 to RAID 5 RLM
  o RAID 60 to RAID 6 RLM
  o RAID 50 to RAID 60 OCE

- During a RAID-level migration, ASM and the BIOS utility show
  different RAID levels while the migration is in progress. ASM shows
  the target RAID level; the BIOS utility shows the current RAID level.

- If a disk error occurs during a RAID-level migration (e.g., bad
  blocks), the exception is reported in the ASM event viewer (bottom
  pane) and in the support archive file (Support.zip, Controller 1
  logs.txt), but not in the main ASM Event Log file, RaidEvtA.log.

- Always allow a RAID-level migration to complete before gathering
  support archive information in Support.zip. Otherwise, the Support.zip
  file will include incorrect partition information. Once the RLM is
  complete, the partition information will be reported correctly.

--------------------------------------------------------------------
6. Power Management Issues and Notes

- You must use a compatible combination of Adaptec Storage Manager
  and controller firmware and driver software to use the power
  management feature. All software components must support power
  management. You can download the latest controller firmware
  and drivers from the Adaptec Web site at www.adaptec.com.

- Power management is not supported under FreeBSD.

- Power management settings apply only to logical drives in the
  Optimal state. If you change the power settings on a Failed
  logical drive, then force the drive online, the previous
  settings are reinstated.

- After setting power values for a logical drive in ARCCONF, the
  settings are not updated in the Adaptec Storage Manager GUI.

--------------------------------------------------------------------
7. "Call Home" Issues and Notes

- The Call Home feature is not supported in this release. To gather
  statistics about your system for remote analysis, enable statistics
  logging in ASM, then create a Call Home Support Archive. For more
  information, see the user's guide.

--------------------------------------------------------------------
8. ARCCONF Issues and Notes

- With VMware ESX 4.1, you cannot delete a logical drive
  with ARCCONF. WORKAROUND: Connect to the VMware machine from a
  remote ASM GUI, then delete the logical drive.

- With Linux kernel versions 2.4 and 2.6, the ARCCONF
  DELETE <logical_drive> command may fail with a Kernel Oops
  error message. Even though the drives are removed from the
  Adaptec Storage Manager GUI, they may not really be deleted.
  Reboot the controller; then issue the ARCCONF DELETE command
  again.

--------------------------------------------------------------------
9. Other Issues and Notes

- Some solid state drives identify themselves as ROTATING media.
  As a result, these SSDs:

  o Appear as SATA drives in the ASM Physical Devices View
  o Cannot be used as Adaptec maxCache devices
  o Cannot be used within a hybrid RAID array (comprised of
    SSDs and hard disks)

- The blink pattern on Adaptec Series 6/6Q/6E/6T controllers differs
  from Series 2 and Series 5 controllers:

  o When blinking drives in ASM, the red LED goes on and stays solid;
    on Series 2 and 5 controllers, the LED blinks on and off.

  o When failing drives in ASM (using the 'Set drive state to failed'
    action), the LED remains off; on Series 2 and 5 controllers, the
    LED goes on and remains solid.

- Cache settings for RAID Volumes (Read cache, Write cache, maxCache)
  have no effect. The cache settings for the underlying logical
  devices take priority.

- On rare occasions, ASM will report invalid medium error counts on
  a SATA hard drive or SSD. To correct the problem, use ARCCONF to
  clear the device counts. The command is:

    arcconf getlogs <Controller_ID> DEVICE clear

- On rare occasions, ASM lists direct-attached hard drives and SSDs
  as drives in a virtual SGPIO enclosure. Normally, the drives are
  listed in the Physical Devices View under ports CN0 and CN1.

- Hard Drive Firmware Update Wizard:

  o Firmware upgrade on Western Digital WD5002ABYS-01B1B0 hard drives
    is not supported for packet sizes below 2K (512/1024).

  o After flashing the firmware of a Seagate Barracuda ES ST3750640NS
    hard drive, you MUST cycle the power before ASM will show the new
    image. You can pull out and re-insert the drive; power cycle the
    enclosure; or power cycle the system if the drive is attached
    directly.

- Secure Erase:

  o If you reboot the system while a Secure Erase operation is in
    progress, the affected drive may not be displayed in Adaptec
    Storage Manager or other Adaptec utilities, such as the ACU.

  o You can perform a Secure Erase on a Solid State Drive (SSD) to
    remove the metadata. However, the drive will move to the Failed
    state when you reboot the system. To use the SSD, reboot to
    the BIOS, then initialize the SSD. After initialization, the SSD
    will return to the Ready state. (An SSD in the Failed state cannot
    be initialized in ASM.)

- The Repair option in the ASM Setup program may fail to fix a
  corrupted installation, depending on which files are affected.
  The repair operation completes successfully, but the software
  remains unfixed.

- Adaptec Storage Manager may fail to exit properly when you create
  64 logical devices in the wizard. The logical devices are still
  created, however.

- The "Clear logs on all controllers" action does not clear events
  in the ASM Event Viewer (GUI). It clears device events, defunct
  drive events, and controller events in the controllers' log files.
  To clear events in the lower pane of the GUI, select Clear
  configuration event viewer from the File menu.

- Stripe Size Limits for Large Logical Drives:

  The stripe size limit for logical drives with more than 8 hard
  drives is 512KB; for logical drives with more than 16 hard
  drives it is 256KB.

- Agent Crashes when Hot-Plugging an Enclosure:

  With one or more logical drives on an enclosure, removing
  the enclosure cable from the controller side may crash
  the ASM Agent.

--------------------------------------------------------------------
(c) 2012 PMC-Sierra, Inc. All Rights Reserved.

This software is protected under international copyright laws and
treaties. It may only be used in accordance with the terms
of its accompanying license agreement.

The information in this document is proprietary and confidential to
PMC-Sierra, Inc., and for its customers' internal use. In any event,
no part of this document may be reproduced or redistributed in any
form without the express written consent of PMC-Sierra, Inc.,
1380 Bordeaux Drive, Sunnyvale, CA 94089.

P/N DOC-01700-02-A Rev A