Test SAN is UP and running… now I just hope it doesn’t fall over again…

well, after grueling hours all attempts proved futile (this is what happens when you don’t have knowledge of the platform/OS/technology), so I googled for some Solaris networking commands.

having collected some basic Solaris network configuration commands (for the time being, I don’t want to elaborate on what happened), the following commands brought the network back to life — it had died, or probably something else happened that I still don’t understand.


ifconfig ntxn3 unplumb

ifconfig ntxn3 plumb

ifconfig ntxn3 x.x.x.x netmask x.x.x.x

route add default x.x.x.x


ntxn3 is the network interface name as identified by NexentaStor (OpenSolaris)
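Put together, the recovery sequence looks roughly like this. A sketch only — the x.x.x.x addresses are placeholders for the real interface address, netmask, and gateway, not values from my setup:

```shell
# Tear the interface down completely, then re-attach the IP stack to it.
ifconfig ntxn3 unplumb
ifconfig ntxn3 plumb

# Assign the address and netmask, and bring the interface up
# (x.x.x.x are placeholders for the real values).
ifconfig ntxn3 x.x.x.x netmask x.x.x.x up

# Restore the default route (x.x.x.x = gateway address).
route add default x.x.x.x

# Verify: interface state, then reachability of the gateway.
ifconfig ntxn3
ping x.x.x.x
```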

all I was worried about was the pool data… the pool is healthy and the data intact. Though it is a test environment and the data was test data, recreating the same environment when something goes wrong is still cumbersome (part of the IT field). Non-technical people/managers think it’s like the vegetable business: when it rots, throw it away. In IT it has to be fixed, as long as it is recoverable.

a date change on NexentaStor rendered it unreachable…

I am now testing NexentaStor on an HP ML370 G6 tower. Here are the specs:

Dual socket, quad core, HT enabled Intel(R) Xeon(R) CPU E5540 @ 2.53 GHz, 8 MB L3 cache
8 GB RAM (4 GB populated per processor)
2 x 73 GB SAS (15k), RAID-1 = system OS
6 x 300 GB SAS (10k), RAID-0 with a single disk in each array to expose the disks to NexentaStor = storage (RAID 10 = approx 838 GB pool)
P410i RAID card, 256 MB
Quad port multifunction 1 GbE card (identified as ntxn0..3 in Solaris (NexentaStor))

NexentaStor 3.0.4: the installation went fine, and I even created a volume of 200 GB, NFS-attached it to the test server (XenServer 5.6), and copied an existing VM from the XS local storage to the NFS storage. I also configured an SMTP server for fault alerts.
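For reference, a quick way to sanity-check the NFS export from the XenServer side before creating the storage repository. A sketch — x.x.x.x stands in for the appliance’s IP, and the export path is a placeholder, not my actual volume path:

```shell
# List the exports the appliance is offering (x.x.x.x = NexentaStor IP).
showmount -e x.x.x.x

# Test-mount the share manually before pointing XenServer at it.
mkdir -p /mnt/nfstest
mount -t nfs x.x.x.x:/volumes/testvol /mnt/nfstest   # export path is a placeholder

# Confirm it is mounted and writable, then clean up.
df -h /mnt/nfstest
touch /mnt/nfstest/.writetest && rm /mnt/nfstest/.writetest
umount /mnt/nfstest
```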

everything was working fine.

after about 15+ hrs of uptime, i received the following fault alert email notification:

Subject: [NMS Report] NOTICE: host nssan

FAULT: **********************************************************************

FAULT: Appliance   : nssan (OS v3.0.4, NMS v3.0.4 (r8917))


FAULT: Primary MAC : 18:a9:5:6e:a1:db

FAULT: Time        : Thu Sep 30 00:00:34 2010

FAULT: Trigger     : runners-check

FAULT: Fault Type  : ALARM

FAULT: Fault ID    : 20

FAULT: Fault Count : 2

FAULT: Severity    : NOTICE

FAULT: Action      : Administrative action required to clear the original

FAULT:             : fault that has caused ‘nms-check’ to go into

FAULT:             : maintenance. Once cleared, run ‘setup trigger nms-check

FAULT:             : clear-faults’ to clear the faults and re-enable

FAULT:             : ‘nms-check’. If the problem does not appear to be an

FAULT:             : actual fault condition, use ‘setup trigger nms-check’ to

FAULT:             : tune-up the fault trigger’s properties. See NexentaStor

FAULT:             : User Guide at http://www.nexenta.com/docs for more

FAULT:             : information.

FAULT: Description : Runner nms-check went into maintenance state

FAULT: **********************************************************************



! For more detais on this trigger click on link below:

! http://x.x.x.x:2000/data/runners?selected_runner=runners-check


Runner nms-check (description: “Track NMS connectivity failures and internal errors”) went into maintenance state

Before I could follow the suggestion in the alert, I noticed (from the report) that the DATE on the server was old. So I changed the date from the console in the recommended format (date -s “20 dec 2010 00:00:00”); then after pressing Enter, the prompt took a pretty long time to return, and the web console stopped responding…
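In hindsight, the safer pattern would have been to check the clock before and after the change. A sketch assuming GNU-style date -s, which is what the console accepted here:

```shell
# Show the current (wrong) system date before touching anything.
date

# Set the date; GNU coreutils syntax, as used on the console above.
date -s "20 dec 2010 00:00:00"

# Confirm the change took effect.
date
```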

issue: I can ping the host IP locally from the console, but not the gateway.

1. Suspecting the switch port: I connected a laptop (Windows XP) to the same port and tested it. There does not seem to be any issue with the switch port, since I was able to ping the gateway.

2. Suspecting the network card: I rebooted the server with an Ubuntu 9.10 x64 live CD and configured the network settings. Here too there seems to be no issue with the Ethernet card.

some info: in Linux the Ethernet driver loaded is netxen_nic; under NexentaStor, ifconfig -a shows ntxn0 to ntxn3.
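From the live CD, the driver behind an interface can be confirmed like this. A sketch — eth0 is whatever name the live CD assigned to the port, which may differ on another box:

```shell
# Which kernel driver backs this interface? On this hardware the
# driver reported was netxen_nic.
ethtool -i eth0

# Details of the NetXen kernel module itself.
modinfo netxen_nic

# The kernel log also records the driver binding at boot.
dmesg | grep -i netxen
```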

the grueling hours of attempts proved futile… I don’t have networking knowledge of the Solaris platform. With Linux, whether an RPM- or Debian-based distro, I am quite comfortable…

will update when it’s fixed…

Insane Massive IO

Intel’s upcoming Sandy Bridge (the codename for the processor micro-architecture developed by Intel as the successor to Nehalem)

a single socket of Sandy Bridge is going to have 40 PCIe lanes
each lane supports 1 Gb/s of IO bandwidth
so a single socket = 40 Gb/s of IO bandwidth
imagine 4 sockets = 160 Gb/s of IO bandwidth
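The arithmetic above as a quick sketch (the 1 Gb/s per-lane figure is the assumption the totals imply):

```shell
# Per-lane figure assumed at 1 Gb/s, matching the totals above.
lanes_per_socket=40
gbps_per_lane=1

per_socket=$((lanes_per_socket * gbps_per_lane))   # 40 Gb/s per socket
four_socket=$((4 * per_socket))                    # 160 Gb/s for 4 sockets

echo "single socket: ${per_socket} Gb/s"
echo "four sockets:  ${four_socket} Gb/s"
```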

really insane massive IO bandwidth…..

for more information check this and this and also this

HP Servers Specs


HP ProLiant ML370 G6

Processor 1: Intel Xeon 2.53 GHz (Quad-Core, Hyper-Threading), 8 MB L3 Cache
Processor 2: Intel Xeon 2.53 GHz (Quad-Core, Hyper-Threading), 8 MB L3 Cache
Memory: 8 GB
Storage: 4 x 300 GB SAS 10k 2.5in
Optical Drive:
RAID Controller: HP Smart Array P410i, 256 MB
Network: Integrated multifunction 4 x 1 GbE ports
Power Supply: 750 W Redundant