Tuesday, April 13, 2010

Vision


Which communication ports does Symantec Endpoint Protection 11.0 use?

Posted: 13 Apr 2010 12:30 AM PDT

The Symantec Endpoint Protection Manager (SEPM) uses two web servers: Internet Information Services (IIS) and Tomcat. IIS uses ports 80 (or 8014) and 443; Tomcat uses ports 9090 and 8443. IIS and Tomcat communicate with each other over HTTP: IIS uses port 9090 to talk to Tomcat, and Tomcat uses port 80 to talk to IIS.
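If you need to confirm that these ports are actually listening on the SEPM server, a quick netstat check is usually enough. This is only a sketch: the port numbers assume the defaults described above, and sepm-server is a placeholder hostname.

netstat -ano | findstr ":8014 :443 :9090 :8443"     (on the SEPM server, list listeners on the default ports)
telnet sepm-server 8014                              (from a client, verify it can reach the management server)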


Client-Server Communication:
For IIS, SEP uses HTTP or HTTPS between the clients or Enforcers and the server. For client-server communication it uses ports 80 (or 8014) and 443 by default. In addition, the Enforcers use RADIUS to communicate in real time with the manager console for client authentication. This is done on UDP port 1812.


Remote Console:
9090 is used by the remote console to download .jar files and display the help pages.
8443 is used by the remote console to communicate with the SEPM and by Replication Partners to replicate data.


Client-Enforcer Authentication:

The clients communicate with the Enforcer using a proprietary communication protocol. This communication uses a challenge-response mechanism to authenticate the clients. The default port for this is UDP 39999.


View System Logs Live

Posted: 13 Apr 2010 12:24 AM PDT

If you want to monitor system logs live, you can use the tail command with the -f or -F option.
This will work in any Linux/UNIX environment.
To exit the view, press Ctrl + C. The same option can be used with any text-based file to watch updates to the file as they happen.

Usage:

# tail -f /var/log/messages

or

# tail -F /var/adm/messages


tail -f will keep working only as long as the underlying file being read doesn't change. If you're following a link to a file, or a log that gets rotated, and the original file is replaced, tail -f keeps reading the old file and misses the new one.

For that reason, it's better to use tail -F for such files, as described above, or tail -f --retry.
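A quick way to see the difference is to simulate a log rotation on a throwaway file; the path here is only an example:

# touch /tmp/app.log
# tail -F /tmp/app.log &
# mv /tmp/app.log /tmp/app.log.1
# touch /tmp/app.log
# echo "new entry" >> /tmp/app.log

tail -F notices that /tmp/app.log has been replaced and starts following the new file, while a plain tail -f would keep reading the renamed copy.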


How Do Network Access Controls Work?

Posted: 13 Apr 2010 12:20 AM PDT

Network access control is the process of restricting access to network resources by the devices that end users connect with. Network access controls implement a defined security policy for access, which is supported by a network access server that performs the authentication and authorization. The server also restricts the data that each user can access, as well as the activities the end user can perform once they gain access to the network.




How Network Access Controls Work

There are several different types of network access controls that perform different functions according to the needs of the organization and the level of security that is required for performing daily functions.



•Agent-Based Network Access Control: An agent-based network access control operates through software installed on the endpoint device (the user's device), which provides a higher level of security and helps ensure that the end user is complying with security policies. The agent continually operates in the background on the device to monitor security compliance and sends periodic updates to the policy server.

•Agentless Network Access Control: An agentless network access control does not require any additional software installation. Instead, this type of network access control assesses compliance before the user is allowed to access the network. The problem with this type of network access control is that authorization is based on an assessment of network traffic, which makes it easier to exploit in order to gain unauthorized access to the network system.

•Hardware-Based Network Access Control: A hardware-based network access control works through an appliance that is installed on the network and operates in line with the network traffic. This type of network access control requires changes in the infrastructure and operational practices to allow for defined access by the end user. Because implementation requires significant server configuration changes, the chances of failure are greater than with other network access control systems.

•Dynamic Network Access Control: Dynamic network access control is the easiest form of deployment for controlling access by end users, because the system does not require any software or hardware appliance installation or changes in the network configuration. Instead, a dynamic network access control works on specific computers that are connected to a local area network and are considered to be trusted systems. When an unauthorized user attempts to access the network, the trusted systems restrict access and then report the action to the main policy server.

The type of network access control you choose for your organization depends on your network configuration and setup. Before choosing an option that suits your organization, it is necessary to evaluate the network configuration and the different reasons for defining access by end users.


Run Levels of various Operating Systems

Posted: 12 Apr 2010 09:09 PM PDT

The term runlevel refers to a mode of operation in computer operating systems that implement Unix System V-style initialization. Conventionally, seven runlevels exist, numbered from zero to six, though up to ten (zero through nine) may be used. S is sometimes used as a synonym for one of the levels.




In standard practice, when a computer enters runlevel zero, it halts, and when it enters runlevel six, it reboots. The intermediate runlevels (1-5) differ in terms of which drives are mounted, and which network services are started. Lower run levels are useful for maintenance or emergency repairs, since they usually don't offer any network services at all. The particular details of runlevel configuration differ widely among operating systems, and slightly among system administrators.



The runlevel system replaced the traditional /etc/rc script used in Version 7 Unix.



Run Levels in Solaris

S, s : Single-user mode. Doesn't require a properly formatted /etc/inittab. Filesystems required for basic system operation are mounted.

0 : Go into firmware (SPARC).

1 : System administrator mode. All local filesystems are mounted. A small set of essential system processes is running. This is also a single-user mode.

2 : Put the system in multi-user mode. All multi-user environment terminal processes and daemons are spawned.

3 : Extend multi-user mode by making local resources available over the network.

4 : Available to be defined as an alternative multi-user environment configuration. It is not necessary for system operation and is usually not used.

5 : Shut the machine down so that it is safe to remove the power. Have the machine remove power, if possible.

6 : Reboot.

a, b, c : Process only those /etc/inittab entries having the a, b, or c run level set. These are pseudo-states which may be defined to run certain commands, but which do not cause the current run level to change.

Q, q : Re-examine /etc/inittab.
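On Solaris you can check the current run level and have init re-read /etc/inittab without changing state; for example:

# who -r          (report the current run level)
# init q          (re-examine /etc/inittab)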



Run Levels in HP-UX

0 : System is completely shut down. All processes are terminated and all file systems are unmounted.

1, s, S : Single-user mode. All system services and daemons are terminated and all file systems are unmounted.

2 : Multi-user mode, except NFS is not enabled.

3 : Multi-user mode. This is the normal operational default state. NFS is enabled.

4 : Multi-user mode with NFS and VUE (VUE is HP's desktop environment, similar to CDE).

6 : Reboot.



Run Levels in OpenBSD

-1 : Permanently insecure mode – always run the system in level 0 mode.

0 : Insecure mode – immutable and append-only flags may be changed. All devices may be read or written subject to their permissions.

1 : Secure mode – the system immutable and append-only flags may not be turned off; disks for mounted filesystems, /dev/mem, and /dev/kmem are read-only.

2 : Highly secure mode – same as secure mode, plus disks are always read-only whether mounted or not, and the settimeofday(2) system call can only advance the time.
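Note that these OpenBSD levels are kernel securelevels rather than SysV run levels; you can inspect the current level with sysctl, for example:

# sysctl kern.securelevel
kern.securelevel=1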



Run Levels in ULTRIX, Digital UNIX / Tru64

0 : System is completely shut down. All processes are terminated and all file systems are unmounted.

1 : Single-user mode. All system services and daemons are terminated and all file systems are unmounted.

2 : Multi-user mode, except NFS is not enabled.

3 : Multi-user mode. This is the normal operational default state. NFS is enabled.

4 : Not used.

5 : Not used.

6 : Reboot.



Run Levels in IRIX

0 : Shut the machine down so it is safe to remove the power. Have the machine remove power if it can.

1 : Put the system into system administrator mode. All filesystems are mounted. Only a small set of essential kernel processes run. This mode is for administrative tasks such as installing optional utilities packages. All files are accessible and no users are logged in on the system.

2 : Put the system into multi-user state. All multi-user environment terminal processes and daemons are spawned. This is the default.

3 : Start the remote file sharing processes and daemons. Mount and advertise remote resources. Run level 3 extends multi-user mode and is known as the remote-file-sharing state.

4 : Define a configuration for an alternative multi-user environment. This state is not necessary for normal system operations and is usually not used.

5 : Stop the IRIX system and enter firmware mode.

6 : Stop the IRIX system and reboot to the state defined by the initdefault entry in inittab.

a, b, c : Process only those inittab entries for which the run level is set to a, b, or c. These are pseudo-states that can be defined to run certain commands but do not cause the current run level to change.

Q, q : Re-examine inittab.

S, s : Enter single-user mode. When the system changes to this state as the result of a command, the terminal from which the command was executed becomes the system console.



Run Levels in SYSV

The following is from a SYSV textbook; these are the run levels generally used on SYSV systems.

0 : Power-down state. Shuts the machine down gracefully so that it can be turned off. Some models turn off automatically.

s : Single-user state. This run level should be used when installing or removing software utilities, checking file systems, or using the Maintenance (/install) file system. It is similar to run level 1; however, in run level s, multi-user file systems are unmounted and daemons are stopped. The terminal issuing the init s becomes the console.

1 : Administrative state. In run level 1, file systems required for multi-user operations are mounted, and logins requiring access to multi-user file systems can be used.

2 : Multi-user state. File systems are mounted and normal user services are started.

3 : Network File System (NFS) state. Prepares your system to use NFS.

4 : User-defined.

5 : Virtually the same as run level 6. See the /sbin/rc0 script for details. Early versions of UNIX used this as an entry to a firmware interface.

6 : Power down and reboot to the state defined by the initdefault entry in the /etc/inittab file.



Run Levels in Linux

0 : Halt the system.

1 : Single-user mode.

2-4 : Multi-user modes. Usually identical. Level 2 or 3 is the default, depending on the distribution.

5 : Multi-user mode with a graphical environment. This applies to most (but not all) distributions.

6 : Reboot the system and return to the default run level.
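On a SysV-style Linux system you can check and change the run level from the shell; a minimal sketch (the target level 3 is only an example):

# runlevel                        (prints the previous and current run level, e.g. "N 3")
# who -r                          (another way to show the current run level)
# telinit 3                       (switch to run level 3 without rebooting)
# grep initdefault /etc/inittab   (show the default run level used at boot)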


Booting microchannel systems into Service mode

Posted: 12 Apr 2010 09:02 PM PDT

To boot microchannel systems into Service mode, turn the key to the Maintenance position and press the yellow reset button twice. You must boot from bootable media, such as an installation CD-ROM, installation tape, or a bootable backup tape made via the mksysb command or the Sysback product of the correct level for this machine.




For AIX Version 3.2, you may use bootable bosboot diskettes. To boot from these, insert the first bosboot diskette into the diskette drive. When you see LED c07, insert the next diskette, which is usually the display extensions diskette. After this diskette is read, you should receive a menu prompting you for the installation diskette.



For information on accessing your rootvg volume group, see the section entitled "Accessing rootvg and mounting file systems".

The preceding discussion assumes that the Service mode bootlist has not been modified from the default bootlist. If the bootlist has been modified, it must be reset so that one of the boot media types from the preceding selections comes before the standard boot media, such as the hard disk.
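On AIX the Service mode bootlist can be inspected and reset with the bootlist command; a minimal sketch, where the device names are only examples and will differ on your machine:

# bootlist -m service -o               (display the current Service mode bootlist)
# bootlist -m service cd0 rmt0 hdisk0  (put the CD-ROM and tape ahead of the hard disk)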



If the machine is an SMP model (7012-Gxx, 7013-Jxx, and 7015-Rxx) and the Autoservice IPL flag is disabled, then a menu like the following will display when it is booting in Service mode:

MAINTENANCE MENU (Rev. 04.03)

0> DISPLAY CONFIGURATION

1> DISPLAY BUMP ERROR LOG

2> ENABLE SERVICE CONSOLE

3> DISABLE SERVICE CONSOLE

4> RESET

5> POWER OFF

6> SYSTEM BOOT

7> OFF-LINE TESTS

8> SET PARAMETERS

9> SET NATIONAL LANGUAGE

SELECT:

You should boot these machines into Service mode (or even Normal mode) with the Fast IPL Flag set; if you do not, the machine can take anywhere from 30 to 60 minutes to boot up. There are a few ways to set the Fast IPL Flag for these machines.

NOTE: The console must be an ASCII type and connected to the S1 port of the system. Graphic monitors will not work.

Use the following instructions to boot SMP machines into service with Fast IPL set.

1. Insert bootable media of the same OS level (a mksysb tape or CD-ROM).

2. Turn off the machine by pressing the white button on front.

3. Turn the key to the Wrench or Service position.

4. The LCD should read STAND-BY.

5. Press the Enter key on the console.

6. A greater-than prompt (>) should display on the monitor.

7. Type in sbb followed by the Enter key.

8. The Stand By Menu should now display.

9. Select 1 Set Flags. This will take you to another set of menus.

10. Select 6 Fast IPL. This should change to enable after it is selected.

11. Enter x to exit the second set of menus.

12. Enter x to exit the first menu.

13. At a blank screen, press the Enter key to obtain the greater-than prompt (>).

14. Type in the word power followed by the Enter key.

15. Turn the machine back on. It should start to boot up. A prompt may display asking if you want to update the firmware. Do not respond; let it continue.

16. Now you may be at the Maintenance Menu with 10 options displayed, 0 through 9. If that is the case, select option 6, System Boot. This will take you to another menu. Select option 0, Boot from the list.

17. The Standard Maintenance Menu should display. System recovery and maintenance can be completed from here.

18. After system recovery and maintenance has been performed, the system is ready to be rebooted into Normal mode. Enter the command mpcfg -cf 11 1 at the command line prompt, then press Enter. This will set the Fast IPL Flag. The system is ready to reboot.

19. Turn the key back to the OK/Normal position.

20. Enter shutdown -Fr, followed by the Enter key.

________________________________________

Booting PCI-based systems into Service mode

When booting a PowerPC into Service mode, cd0 or rmt0 must be before the hdisk in the bootlist. If not, change the bootlist at boot time. On some models, you can set the machine to use a default bootlist that includes both cd0 and rmt0. If a bootable CD or tape is in the CD-ROM or tape drive, the machine will boot from this device.

For most of the newer PCI-based models, selecting the default bootlist, with a bootable tape or CD loaded in the machine, causes the system to automatically boot from that device. Generally, the next menu on the screen asks the administrator to define the system console.

For all machines discussed here, if you are using a graphical terminal, you will use a function key such as F5. If you are using an ASCII terminal, use the equivalent number key, such as 5. Use the numbers across the top of the keyboard, not the numbers on the numeric keypad. On ASCII terminals the icons may not be displayed on the screen; in that case, press the number key between the second and third beeps (the second beep is actually a series of three clicks).

________________________________________

PCI machine-specific information

The following systems all use the F5 or 5 key to read from the default boot list, which is written into the system firmware:

MODEL 7017 7024 7025 7026 7043 7137

——- ——- ——- ——- ——- ——- ——-

TYPE S70 E20 F30 H10 43P-140 F3L

S7A E30 F40 H50 43P-150

S80 F50 H70 43P-240

B80 43P-260

On these machines, use 5 (on the keyboard, not the keypad) if you are using an ASCII terminal. On a locally attached graphics console, use the F5 function key. The F5 or 5 key must be pressed just after the keyboard icon or message is displayed on the console. If you have a 7026-M80, 7026-H80, or 7025-F80, the 5 key will be the default whether you have an ASCII or graphics console.

The following systems use the F1 key to enter System Management Services mode (SMS):

MODEL 6040 7042 7247 7249

——- ——- ——- ——- ——-

TYPE 620 850 82x 860

You should be in an Easy-Setup menu. Select the Start Up menu. Clear the current bootlist settings and then select the CD-ROM for choice 1 and hdd (the hard disk) for choice 2. Select OK. Insert the CD-ROM and select the EXIT icon. The machine should now boot from the CD-ROM.

The following systems use the F2 key to enter SMS:

MODEL 6015 6050 6070 7020 7248

——- ——- ——- ——- ——- ——-

TYPE 440 830 850 40P 43P

Select Select Boot Device from the initial menu on the screen, and then select Restore Default Settings from the list. Press the Esc key to exit all the menus, and then reboot the machine. The system should boot from your bootable media.

For information on accessing the rootvg volume group, see the next section in this document.

________________________________________

Accessing rootvg and mounting file systems

For AIX Version 3, choose the limited function maintenance shell (option 5 for AIX 3.1, option 4 for AIX 3.2).

If you only have one disk on the system, then hdisk0 will be used in the execution of the getrootfs or /etc/continue commands, which follow. If you have more than one disk, determine which disk contains the boot logical volume in this manner:

AIX 3.2.4 or AIX 3.2.5:

Run getrootfs; the output will indicate which disk contains the hd5 logical volume.

AIX 3.1 to AIX 3.2.3e:

Run lqueryvg -Ltp hdisk# for each hdisk. You can obtain a listing of these with the command lsdev -Cc disk. Repeat this command until you get output similar to the following:

00005264feb3631c.2 hd5 1

If more than one disk contains this output, use any disk when running getrootfs.

Now, access the rootvg volume group by running one of the following commands, using the disk you obtained in the preceding step:

AIX 3.1: /etc/continue hdisk#

AIX 3.2.0-3.2.3e: getrootfs -f hdisk#

AIX 3.2.4-3.2.5: getrootfs hdisk#

NOTE: If you want to leave the primary OS file systems (/, /usr, /tmp, and /var) unmounted after this command has completed, to run fsck, for instance, place a space and the letters sh after the hdisk in the preceding command. For example:

getrootfs hdisk0 sh
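With the file systems left unmounted you can check them before mounting. A minimal sketch; the logical volume names below are the usual AIX defaults (hd4 is /, hd2 is /usr, hd9var is /var, hd3 is /tmp) and may differ on your system:

# fsck -y /dev/hd4
# fsck -y /dev/hd2
# fsck -y /dev/hd9var
# fsck -y /dev/hd3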

For AIX Versions 4 and 5, choose Start Maintenance Mode for System Recovery, option 3. The next screen will be called Maintenance; select option 1, Access a Root Volume Group. At the next screen, type 0 to continue, and select the appropriate volume group by typing the number next to it. A screen like the following will display.

Example:

Access a Root Volume Group

Type the number for a volume group to display the logical volume information and press Enter.

1) Volume Group 0073656f2608e46a contains these disks:

hdisk0 2063 04-C0-00-4,0

Once a volume group has been selected, information will be displayed about that volume group.

Example:

Volume Group Information

——————————————————————————

Volume Group ID 0073656f2608e46a includes the following logical volumes:

hd6 hd5 hd8 hd4 hd2 hd9var

hd3 hd1

——————————————————————————

Type the number of your choice and press Enter.

1) Access this Volume Group and start a shell

2) Access this Volume Group and start a shell before mounting filesystems

99) Previous Menu

If the logical volumes listed do not include logical volumes like hd4, hd2, hd3, and so on, you may have selected the wrong volume group. Press 99 to back up one screen and select again.

Now you may select one of two options: Access this volume group and start a shell (option 1), or Access this volume group and start a shell before mounting file systems (option 2). Option 2 allows you to perform file system maintenance on /, /usr, /tmp, and /var before mounting them.

NOTE: If you intend to use SMIT or vi, set your terminal type in preparation for editing the file, where xxx stands for a terminal type such as lft, ibm3151, or vt100:

TERM=xxx

export TERM

Errors from these steps may indicate failed or corrupt disks in rootvg. These problems should be corrected. For additional assistance, contact your vendor, your local branch office, or your AIX support center.

________________________________________

Known problems

NOTE: Ensure you are using an original AIX base media to boot from, rather than a burned copy.

You may receive the following error when trying to access rootvg in service mode at AIX 5.1:

Examine .loader section symbols with the 'dump -Tv' command.

Could not load program /usr/bin/ksh: Symbol resolution failed for /usr/lib/libc.a (shr.o)

because: Symbol getvtid (number 258) is not exported from dependent

module /unix.

This error is likely due to a mismatch of the boot media and the system's AIX level.

Solution

Use a non-auto install mksysb from the same system, or use AIX CD media labeled LCD4-1061-04 or higher (9/23/2002, integrated ML03).

________________________________________

Related documentation

For more in-depth coverage of this subject, the following IBM publication is recommended:

AIX Version 4.3 System Management Guide: Operating System and Devices

AIX Version 5.1 System Management Guide: Operating System and Devices


Installing AIX for Tape Backup

Posted: 12 Apr 2010 08:37 PM PDT

The AIX operating system can be installed from a system backup tape created using smitty mksysb.
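For reference, a minimal sketch of creating such a backup from the command line; the tape device rmt0 is the usual default and is only an example:

# mksysb -i /dev/rmt0    (create a bootable system backup of rootvg on tape; -i regenerates /image.data first)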




To install AIX from a system backup:

1. Make sure that the tape drive is turned ON.

2. Make sure that the server is turned ON.

3. Open the tape drive door.

4. Turn the key to "Service".

5. Insert the AIX Operating System backup tape into the tape drive.

6. Close the tape drive door.

7. On the server, press the "Reset" button twice.

If the TESTING COMPLETED screen displays, press [[Enter]] to continue.

Note: For a few minutes the system might appear idle. Do not open the tape drive door. Wait for the next screen to display.

8. The following message displays: "Please define the system Console"

Press [[F1]] to define the system console and then press [[Enter]]. The INSTALLATION AND MAINTENANCE screen appears.

9. Select Install a System that was created with SMIT "Backup The System" function or the "mksysb" command

Press [[Enter]] to install the operating system from the backup tape. The CURRENT SYSTEM SETTINGS screen displays.

10. Verify that the system settings are correct. If the correct settings are displayed, select Install a SMIT "Backup The System" image with the current setting. Press [[Enter]]. The Final Warning screen displays.

11. Select Continue with installation. Press [[Enter]].

12. Press [[Enter]] to start the tape. The installation takes 45 minutes to 1.5 hours.

13. Turn the key to "Normal" before the installation completes. When the installation is complete, a screen displays indicating that the AIX Base Operating System installation is complete.

14. Remove the AIX Operating System backup tape from the tape drive.

15. Press [[Enter]] to reboot the server.

16. During rebooting ignore the following error messages:

The System Resource Controller daemon is not active. Machine not identical to previous configuration. Shutdown, rebooting



Note: If the system used to create the backup tape is not the same as the system on which it is now being installed, the server might reboot two or three times.



Each time the server reboots, the system reconfigures. When the server reboots successfully, a login prompt displays.


Friday, April 9, 2010

Linux Software RAID10





Creating RAID arrays in Linux during installation is an easy task (using Disk Druid or any similar graphical installer). It's best to keep your root filesystem out of both RAID and LVM for easier management and recovery.









Linux RAID and Hardware



I've seen a lot of confusion about Linux RAID, so let's clear that up. Linux software RAID has nothing to do with hardware RAID controllers. You don't need an add-on controller, and you don't need the onboard controllers that come on most motherboards. In fact, the lower-end PCI controllers and virtually all the onboard controllers are not true hardware controllers at all, but software-assisted, or fake RAID. There is no advantage to using these, and many disadvantages. If you have these, make sure they are disabled.

Ordinary PC motherboards support up to six SATA drives, and PCI SATA controllers provide an easy way to add more. Don't forget to scale up your power and cooling as you add drives.

If you're using PATA disks, use only one per IDE controller. If you have both a master and a slave on a single IDE controller, performance will suffer, and a failure of either disk risks bringing down the controller and the other disk with it.







GRUB Follies



GRUB Legacy's (v. 0.9x) lack of support for RAID is why we have to jump through hoops just to boot the darned thing. Beware your Linux's default boot configuration, because GRUB must be installed to the MBRs of at least the first two drives in your RAID1 array, assuming you want it to boot when there is a drive failure. Most likely your Linux installer only installs it to the MBR of the drive that is first in the BIOS order, so you'll need to manually install it on a secondary disk.

First open the GRUB command shell. This example installs it to /dev/sdb, which GRUB sees as hd1 because it is the second disk on the system:





root@uberpc ~# grub

GNU GRUB version 0.97 (640K lower / 3072K upper memory)



Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists the possible completions of a device/filename.



grub> root (hd1,0)

Filesystem type is ext2fs, partition type 0xfd



grub> setup (hd1)

Checking if "/boot/grub/stage1" exists... yes

Checking if "/boot/grub/stage2" exists... yes

Checking if "/boot/grub/e2fs_stage1_5" exists... yes

Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 17 sectors are embedded. succeeded

Running "install /boot/grub/stage1 (hd1) (hd1)1+17 p (hd1,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded

Done.



You can do this to every disk in your RAID 1 array. /boot/grub/menu.lst should have a default entry that looks something like this:



title Ubuntu 7.10, kernel 2.6.22-14-generic, default

root (hd0,0)

kernel /boot/vmlinuz-2.6.22-14-generic root=/dev/md0 ro

initrd /boot/initrd.img-2.6.22-14-generic



Let's say hd0,0 is really /dev/sda1. If this disk fails, the next drive in line becomes hd0,0, so you only need this single default entry.

GRUB sees PATA drives first, SATA drives second. Let's say you have two PATA disks and two SATA disks:





/dev/hda

/dev/hdb

/dev/sda

/dev/sdb



GRUB numbers them this way:





hd0

hd1

hd2

hd3

If you have one of each, /dev/hda=hd0, and /dev/sda=hd1. The safe way to test your boot setup is to power off your system and disconnect your drives one at a time.
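GRUB Legacy records this mapping in /boot/grub/device.map; for the mixed PATA/SATA example above the file would look roughly like this (an illustrative sketch, your file may differ):

(hd0) /dev/hda
(hd1) /dev/hdb
(hd2) /dev/sda
(hd3) /dev/sdb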







Managing Linux RAID With mdadm



There are still a lot of howtos on the Web that teach the old raidtools commands and the raidtab file. Don't use these. They still work, but the mdadm command does more and is easier.







Creating and Testing New Arrays



Use this command to create a new array:







# mdadm -v --create /dev/md1 --level=raid10 --raid-devices=2 /dev/hda2 /dev/sda2



You may want to have a hot spare. This is a partitioned, formatted hard disk that is connected but unused until an active drive fails, then mdadm (if it is running in daemon mode, see the Monitoring section) automatically replaces the failed drive with the hot spare. This example includes one hot spare:







# mdadm -v --create /dev/md1 --level=raid10 --raid-devices=2 /dev/hda2 /dev/sda2 --spare-devices=1 /dev/sdb2





You can test this by "failing" and removing a partition manually:







# mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2





Then run some querying commands to see what happens.

When you have more than one array, they can share a hot spare. You should have some lines in /etc/mdadm.conf that list your arrays. All you do is create a spare group by adding the spare-group lines shown below:







ARRAY /dev/md0 level=raid1 num-devices=2 UUID=004e8ffd:05c50a71:a20c924c:166190b6

spare-group=share1

ARRAY /dev/md1 level=raid10 num-devices=2 UUID=38480e56:71173beb:2e3a9d03:2fa3175d

spare-group=share1





View the status of all RAID arrays on the system:







$ cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

md1 : active raid10 hda2[0] sda2[1]

6201024 blocks 2 near-copies [2/2] [UU]

md0 : active raid1 hda1[0] sda1[1]

3076352 blocks [2/2] [UU]



The "personalities" line tells you what RAID levels the kernel supports. In this example you see two separate arrays: md1 and md0, that are both active, their names and BIOS order, and the size and RAID type of each one. 2/2 means two of two devices are in use, and UU means two up devices.



You can get detailed information on individual arrays:







# mdadm --detail /dev/md0

Is this partition part of a RAID array? This displays the contents of the md superblock, which marks it as a member of a RAID array:





# mdadm --examine /dev/hda1

You can also use wildcards, like mdadm --examine /dev/hda*.







Monitoring



mdadm itself can run in daemon mode and send you email when an active disk fails, when a spare fails, or when it detects a degraded array. Degraded means a new array that has not yet been populated with all of its disks, or an array with a failed disk:





# mdadm --monitor --scan --mail=shiroy.p@hcl.in --delay=2400 /dev/md0





Your distribution may start the mdadm daemon automatically, so you won't need to run this command. Kubuntu controls it with /etc/init.d/mdadm, /etc/default/mdadm, and /etc/mdadm/mdadm.conf, so all you need to do is add your email address to /etc/mdadm/mdadm.conf.
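In mdadm.conf the notification address goes on a MAILADDR line; a minimal sketch with a placeholder address:

MAILADDR admin@example.com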







Starting, Stopping, and Deleting RAID



Your Linux distribution should start your arrays automatically at boot, and mdadm starts them at creation.

This command starts an array manually:





# mdadm -A /dev/md0



This command stops it:





# mdadm --stop /dev/md0





You'll need to unmount all filesystems on the array before you can stop it.

To remove devices from an array, they must first be failed. You can fail a healthy device manually:





# mdadm /dev/md1 --fail /dev/sda2 --remove /dev/sda2



If you're removing a healthy device and want to use it for something else, or just want to wipe everything out and start over, you have to zero out the superblock on each device or it will continue to think it belongs to a RAID array:





# mdadm --zero-superblock /dev/sda2

Adding Devices

You can add disks to a live array with this command:





# mdadm /dev/md1 --add /dev/sdc2





This will take some time to rebuild, just like when you create a new array.
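You can watch the rebuild progress as it happens, for example:

# watch cat /proc/mdstat     (refreshes the array status every two seconds)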

That wraps up our tour of RAID 10 and mdadm.







Resources



man mdadm

Serial ATA (SATA) for Linux

GRUB manual

BAARF: Battle Against Any RAID Five

Basic RAID






Tuesday, April 6, 2010

iKill - Prevents Spread of Viruses


Version : 3.2.0
Build : 190
Build Date : 21 March 2010

Minimum Requirements:

1) OS - Windows [98, 2K, XP, Server 2003, Vista...].
2) Processor 400MHz+
3) RAM - 96 MB
4) Microsoft .NET Framework 2.0 (See download link below)
IF using Windows XP or below.

Installation:

Before installing make sure you have Microsoft .NET Framework 2.0 .
Icons will be automatically added to the desktop and the start menu.

Working:

1) Anti Virus
- It works by scanning the system for removable drives.
- If one is found, it parses the autorun.inf file for the executables it may run.
- If AutoProtect is enabled, it will automatically delete the suspect files present on the drive;
otherwise, you will be asked whether you want the suspected files deleted.
In general, any removable USB device (a pen drive, iPod, MP3 player, mobile phone, and so on) may carry viruses; the device just acts as a carrier. The viruses/trojans exploit the autorun.inf file
to execute themselves whenever you open the drive by double-clicking. They may even
shadow the Open, Explore, Search, and other entries using shell commands, like:
shell\Explore\command = virus.exe
Here, when you right-click on the drive icon and click Explore, virus.exe would be launched,
infecting the whole system, and then it will start spreading by any means possible.
- So, the program parses the autorun.inf for you and deletes the viruses/trojans.
There is little legitimate use for autorun.inf on a removable drive (it is rarely used by some applications to
provide added functionality, such as a wireless configuration tool to help set up a home network,
but such applications are limited), so you can safely delete it.

2) Process viewer/killer.
  • It displays all the currently running processes along with their user, memory usage, and full path.
  • It is useful for finding and killing currently running processes (viruses/trojans).
  • To use the ‘Kill’ feature you need to enable the ‘Enable Advanced Options’ in the Settings Tab.

Screenshot: Process Viewer/Killer

3) Service viewer/controller.
  • It displays all the currently registered services along with their path, user, start mode, status, and whether they can be stopped.

  • It is useful for finding and stopping suspicious services (viruses/trojans).
  • To use the ‘Start’ or ‘Stop’ feature you need to enable the ‘Enable Advanced Options’ in the Settings Tab.

Screenshot: Service Viewer/Controller

4) Power Tools.
  • Enables/Disables Registry Editor.
  • Enables/Disables Task Manager.
  • Enables/Disables Folder Options.
  • Fixes “Hide Files and Folder” options in Folder Options.
  • Enables/Disables autorun.inf functionality.
  • Restart Windows Explorer.
    • Make sure there are no pending ongoing file operations, as they will be prematurely terminated.
  • Forced Restart System.

Here, each setting is shown with a status indicator meaning Enabled, Disabled, or Error / Default State (i.e. the registry key does not exist).

Screenshot: Settings

Updates
Updates for This Version 3.2.0
1) Added option to elevate in Vista and Windows 7.
2) Improved file deletion process.
3) Change in behavior:  Disabling ‘autorun’ also disables ‘autoplay’.
4) Improved file permission taking algorithm.
Updates for last Version 2.1.4
1) Added option for taking file permissions/ownership.
2) Added new options.
3) Added Process Viewer/Killer
4) Added Service Viewer/Controller
5) Added Power Tools
  • Toggle Registry Editor
  • Toggle Task Manager
  • Toggle Folder Options
  • Toggle autorun.inf functionality
  • Fix “Hide Files and Folder” options in Folder Options.
6) Added Program Update Feature.
Updates for last version 1.2.1
1) Stopped Scanning of A:\ and B:\ as this leads to excessive floppy drive access.
Updates for last version 1.2
1) Fixed excessive floppy drive scanning problems.
2) Fixed "array out of range" errors.



It requires Microsoft .NET Framework Version 2.0 Redistributable Package
