Unable to replace drive in software RAID-5 array (Fedora 14)
Tamater
Nov 18 2011, 01:42 PM
Post #1



Group: Members
Posts: 1
Joined: 18-November 11
Member No.: 15,698



I'm new to managing a RAID-5 array here at work. This is now the second drive out of a batch of 4 that has failed on us in less than a year, and we're quite unhappy about it (looking at you, WD). I can't seem to get the new drive accepted into the array.

Here's my setup:

OS: Fedora 14
Drives: 1 system drive, plus 3 existing 2 TB drives in RAID-5 (of the original 4), all SATA


So, I removed the old failing drive (it has been sent off to WD) and inserted my new drive.

Upon boot, I loaded up palimpsest and clicked on the RAID listed on the left:

State: Not running, partially assembled
Components: 4
In the red "Volume" bar graphic it says "RAID Array is not running"; however, the "Stop RAID Array" button is enabled. It also says "/dev/md127" in the title bar.

When I click Edit Components and then Add Spare, it lets me add the new drive.
Then, back on the main palimpsest screen, it still shows the RAID as not running. When I click Start Array, it gives the following error:

Error assembling array: mdadm exited with exit code 1: mdadm: failed to RUN_ARRAY /dev/md0: Input/output error
mdadm: Not enough devices to start the array.

This error occurs whether I add the new drive or not.
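
From what I've been reading, the command-line route might be to stop the half-assembled array and try a forced assemble from the surviving members, something like the sketch below. The partition names are only my guesses (I'd check /proc/mdstat and mdadm --examine first), so please correct me if this is the wrong approach:

# Sketch only: stop the partially assembled array, then force-assemble it
# from the members that still have valid superblocks. /dev/sd[abc]1 are
# guessed names for the three surviving drives.
/sbin/mdadm --stop /dev/md127
/sbin/mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
cat /proc/mdstat    # the array should come up degraded (3 of 4 devices)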


[aside] Is it normal that the array gets auto-assembled as md127 on boot, but tries to come up as md0 after I stop and restart it?
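
My guess is that md127 shows up because there's no matching ARRAY line in /etc/mdadm.conf, so mdadm falls back to an auto-generated name. If that's right, an entry along these lines, using the UUID from the --detail output below, would pin the name. This is just a sketch, not something I've tried:

# Guessed /etc/mdadm.conf entry; UUID copied from the mdadm --detail output
ARRAY /dev/md0 metadata=1.2 UUID=68fba474:da41eab6:7071f9fc:baa56434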


Also, when I run /sbin/mdadm --detail /dev/md127, I get the following:


[root@spectrum ~]# /sbin/mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Mon May 9 11:46:07 2011
Raid Level : raid5
Used Dev Size : 1953510400 (1863.01 GiB 2000.39 GB)
Raid Devices : 4
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Tue Nov 15 15:55:43 2011
State : active, FAILED, Not Started
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 512K

Name : :RAID
UUID : 68fba474:da41eab6:7071f9fc:baa56434
Events : 56793

Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 1 1 active sync /dev/sda1
2 0 0 2 removed
3 0 0 3 removed



It's as if it is saying there are only 2 drives connected when I definitely have the original 3 (plus the new one).
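
To figure out which drives mdadm still recognizes as members, I was planning to check the superblock on each partition, roughly like this (the partition names are guesses, with /dev/sdd1 being the brand-new drive):

# Inspect the RAID superblock on each candidate partition; names are guesses.
for part in /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1; do
    /sbin/mdadm --examine "$part"
done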



How do I get this sorted out? I'm not familiar with mdadm or any other command-line RAID administration. These drives hold important medical research data backups and I absolutely must get them back (I've now realized some of that data has no other backup).
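
My rough plan, once the array is running again (even degraded), is to add the new drive and let it rebuild, roughly like this. Again, /dev/sdd1 is only my guess for the new disk, and it would need a partition matching the existing members:

# Once the array is assembled and running degraded, add the new drive so
# the rebuild can start; /dev/sdd1 is a guessed name for the new disk.
/sbin/mdadm --add /dev/md0 /dev/sdd1
cat /proc/mdstat    # watch the recovery progress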

Thanks in advance,

Matt