Raid or second drive as backup?

ozgurerdogan

I am building a new box but just can't decide whether I should make it RAID 1 or use the second disk as a backup disk and let DA back up to it.
I've heard that RAID 1 is not always the optimal solution, as it can cause more trouble than plain SATA II usage.
So what would you recommend?
Thanks
 

Hardware RAID 1 coupled with an offsite backup via rsync would be optimal, though software RAID 1 is better than none. The problem with using the second HDD as a non-RAID backup is that if your main drive dies, you'll have to restore everything manually. With RAID 1, when either drive dies you keep running; you then replace the bad drive and rebuild the array while you're still serving web pages.
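For example, a nightly offsite copy of user data might look something like this (a sketch only; the host and paths here are made up):
Code:
# push /home to a remote backup box over ssh
rsync -a --delete /home/ backup@backuphost:/backups/web1/home/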
 
Ok, thanks for the answer, but one thing: I've read in some posts something like "both hard drives may not have boot files" for RAID systems. What does that mean? Can I safely remove the bad one and replace it without any more effort at the Linux command prompt? I think many people are using RAID 1, so I will probably choose that. I am a Windows guy, so this is my first Linux box.
Thanks
 
Your data center can certainly set mirrored drives up for you, yes.
 
Hardware RAID 1 coupled with an offsite backup via rsync would be optimal, though software RAID 1 is better than none.
Most of us in the Linux world believe that software RAID is better than hardware RAID; it is certainly better than the many so-called hardware RAID controllers we call FRAID (for "fake RAID").

Bottom line: if you need a driver to make your RAID controller work, it's not real hardware RAID; it's just an interface to the kernel. A real RAID controller does all the work itself and presents the array to the operating system as if it were one drive, so it doesn't require any drivers.

Jeff
 
Ok, thanks for the answer, but one thing: I've read in some posts something like "both hard drives may not have boot files" for RAID systems. What does that mean?
If you use real hardware RAID (see my previous reply in this thread), then it's up to the RAID controller to manage; the boot files (which are ordinary files called for as part of your startup sequence) should be fine.

If you're using FRAID I have no idea; I've never used FRAID.

If you use software RAID, the problem is that by default GRUB isn't set up to boot from the second drive if the first fails. There is information on the net that will tell you how to configure GRUB if you want to risk it. We find that we can't just give instructions, because it's subtly different from server to server.
Can I safely remove the bad one and replace it without any more effort at the Linux command prompt?
The problem (see above) comes when you try to reboot. You can fix the GRUB configuration so it will reboot (we recommend trying it before depending on it), or you can have someone do it for you.
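As an illustration only (this is exactly the part that varies from server to server), the usual legacy-GRUB approach is to install the boot loader on the second disk as well, mapped as (hd0) so it can boot on its own; the device names here are assumptions:
Code:
grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit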

Jeff
 
Ok, it is still in a test environment. So can I test RAID like so:
install CentOS and something else like Webmin, then halt the box, unplug one of the drives, and start it up to see if all goes well? Then halt again and plug it back in. I am using the software RAID that is integrated on the mainboard.
 
First of all: if you're using a RAID controller integrated on the mainboard (motherboard) to run RAID, then it's most likely FRAID. There's no special controller necessary for software RAID; it's all done in the kernel.

If the result of:
Code:
$ df -h
shows devices whose names start with /dev/md (md after the second slash), then the system is using software RAID. Anything else (often names starting with /dev/sda) means the system is using some kind of hardware RAID. Built-in RAID controllers are almost always FRAID.
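For example, on a software RAID box the output might look something like this (sizes and mount points are illustrative):
Code:
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0              9.7G  2.1G  7.1G  23% /
/dev/md1              531G   33G  471G   7% /home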

That would be a good test. You don't even have to install anything else.
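If it does turn out to be software RAID, you can also watch the array state during the pull test; a healthy mirror shows [UU] and a degraded one [U_]. Illustrative output:
Code:
$ cat /proc/mdstat
md0 : active raid1 sdb1[1] sda1[0]
      10485696 blocks [2/2] [UU]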

Jeff
 
Ok, thanks for the important info. I will test in a couple of hours and post here again. BTW, does DA also support the latest release of CentOS, 5.2? I do not see it on the DA system requirements page.
 
If I make RAID 1 with the integrated RAID on the mainboard, then after selecting the keyboard layout I get the following error:
glibc detected /usr/bin/python: malloc(): memory corruption: 0x09a3bee0

But if I set the disks to non-RAID, plain SATA, I can install CentOS.

Is there anything I can do? Or will I have to install with a non-RAID setup?
 
Are you saying that you get that error during the OS install? If so, then you've probably either got a defective motherboard, or the built-in RAID doesn't support the OS distribution you're trying to install.

I forgot to mention previously: if you're going to do that test with one drive pulled, be sure to do it twice, once for each drive.

If I were you I'd set the disks to non-RAID, then install software RAID when configuring the drives during OS Install.
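One more note on the pull test: with software RAID the replugged drive won't necessarily rejoin the mirror on its own; re-adding it by hand looks roughly like this (hypothetical device names):
Code:
# re-add the pulled partition to the mirror, then watch it resync
mdadm /dev/md0 --add /dev/sdb1
cat /proc/mdstat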

Jeff
 
Hi, first of all thank you for your help. It's really not possible to find anything on the net about that error; some sites say it might be a bug in CentOS.
The mainboard is brand new and I do not think it is defective, but maybe the motherboard does not support Linux. It is a Gigabyte EP35C-DS3R S/L DDR2+DDR3 PCI-E 1600 LGA.

Ok you say :
If I were you I'd set the disks to non-RAID, then install software RAID when configuring the drives during OS Install.
For this method, I will need a RAID driver disk for CentOS, right? But then what is the difference between the RAID that I set up in the BIOS and the method that you suggest?

Thanks
 
You will NOT need any driver for software RAID for linux; it's built into the kernel.

When you're installing, select the custom drive configuration and set up software RAID.

There's a linux How-To here; it's old but still useful.
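If you'd rather build the mirror by hand afterwards instead of in the installer, the mdadm equivalent looks roughly like this (a sketch; the partition names are assumptions):
Code:
# create a RAID 1 mirror from two equally sized partitions and format it
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0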

To answer your other questions: we don't recommend specific RAID cards; we believe that software RAID is better on Linux.

We don't recommend specific motherboards as we don't build servers from scratch; we use SuperMicro servers; they have distributors worldwide.

Jeff
 
Ok, so there are various RAID configurations:
Motherboard built-in RAID controller (FRAID),
RAID with a real RAID card (hardware RAID),
Software RAID in Linux, by choosing a custom configuration.

So if I choose software RAID in Linux, will I have any issues later when one of the drives fails and needs to be replaced? Actually, I have decided to make the second drive a backup drive, as I have been advised it will be more flexible in case of reinstalling the OS.

And here is the layout I am planning; maybe you can also please advise me on that.
This server will be a web/DNS/mail/MySQL (hosting) server in a datacenter.
As a Windows guy, I am still trying to figure out these partition types. I looked at the Installation Requirements page and created a new layout.
I have two 750 GB SATA II disks and 4 GB of RAM:

/dev/hda
/dev/hda1 /boot /ext3 format(yes) 141 MB
/dev/hda3 /var /ext3 format(yes) 100 GB
/dev/hda5 /usr /ext3 format(yes) 20 GB
/dev/hda6 / /ext3 format(yes) 10 GB
/dev/hda7 /tmp /ext3 format(yes) 10 GB
/dev/hda8 swap format(yes) 8 GB
/dev/hda2 /home format(yes) 550 GB
/dev/hdc >>> This is the 2nd HDD; I will mount it and use it as a backup disk.
 
The following is my opinion only:

/boot: never needs to be larger than 1G; ours are about 100M.
/var & /usr: swap them; /var never needs to be larger than 20G (as long as you're using Maildir, which you should), and /usr probably won't need to be 100G, but you may want it over 20G.
swap: we wouldn't use more than the amount of memory; the old recommendation of twice memory is outdated.

Jeff
 
Ok, this is confusing me, because DA says:
/boot 40 meg
swap 2 x memory
/tmp 1 Gig. Highly recommended to mount /tmp with noexec,nosuid in /etc/fstab
/ 6-10 Gig
/var 8-20 gig. Emails, logs and databases stored here
/usr 5-12 gig. Just DA data, source code, frontpage.
/home rest of drive. Roughly 80% for user data.
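(For reference, that noexec/nosuid recommendation for /tmp is a single line in /etc/fstab; with a dedicated /tmp partition like the /dev/hda7 in my layout above, I understand it would look something like this:)
Code:
/dev/hda7    /tmp    ext3    defaults,noexec,nosuid    1 2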

My /boot is not 1G, it is only 140 MB.
So if /var will collect email, logs, and databases, it must be large, like 100 GB on a 750 GB hard disk. I imagine many people will use mail accounts, so why only 20 GB? Or am I missing something here? I am planning to offer mail hosting to many people.

So you do not recommend making swap 2 x total memory as DA does?
thx
 
I do not. I've had both public and private discussions about it with kernel authors and maintainers, but it's my own belief.

Why? Because by the time you're using more than a half gig or so of swap, your server is running so slowly that you'll need to find out why.
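A quick way to see how much swap is actually in use at any moment:
Code:
$ free -m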

I somehow misread your /boot numbers; there's nothing at all wrong with 141M.

As for /var, I should have been clearer: size it for your maximum need, keeping in mind that databases are kept in /var.

As for /usr: that's all DirectAdmin uses it for, but as/if you add other programs, they'll no doubt end up there.

As I posted, I generally use about 140M in /boot.

/var doesn't collect email if you're using Maildir. If you are, the email ends up in /home.

If you're planning on mail hosting I'd recommend using Maildir; mbox just isn't suitable for a lot of large mailboxes being accessed frequently.

Jeff
 
Ok, thanks for your time. I will go to the datacenter and get a lesson from the staff there, because everybody says something different. For example, the datacenter suggests that using the default layout is fine. And I will ask them all the questions face to face :)
Thanks a lot.
 