Advanced Topics - Setting Up a File Server


Introduction

This page provides a description of the Packages, Modules and configurations to use Bering-uClibc 4.x as a NAS (Network-Attached Storage) and/or SAN (Storage Area Network) solution. The main differences between these two approaches are:

  • With a NAS solution, hard drives are formatted on the Bering-uClibc machine and mounted by the local OS, and directories are then shared out to (typically) multiple other machines across the network using file sharing protocols like CIFS and NFS.
  • With a SAN solution, hard drives or partitions are not mounted by the local OS but shared out at a low level to (typically) a single other machine across the network using a block-level protocol like iSCSI.

To be ready for this special usage, Bering-uClibc versions 3.0 and above have DMA enabled by default when available. This will speed up hard disk performance considerably.


Requirements

Base packages

The following basic packages are recommended to build a NAS or SAN solution:

  • hdsupp.lrp to set up and maintain hard disk installations
  • mdadm.lrp to manage Linux Software RAID arrays as described at http://raid.wiki.kernel.org/index.php/Linux_Raid
  • smartd.lrp to monitor the integrity of the hard disks as described at http://smartmontools.sourceforge.net/

The hdsupp.lrp Package currently contains the following programs: badblocks, e2fsck, e2label, fdisk, hdparm, syslinux, tune2fs, dosfsck, mke2fs, mkdosfs, mkfs.ext2, mkfs.ext3, mkfs.msdos, mkfs.vfat, mkswap, swapoff, swapon, fsck, fsck.ext2, fsck.ext3, fsck.msdos, fsck.vfat and losetup.


Configuring a NAS solution

For any NAS solution, the first step is to prepare the disk partitions and mount them on the local (Bering-uClibc) operating system. A typical procedure would be as follows (see the command sketch after this list):

  1. Use fdisk to configure the disk partitions.
  2. Format the disk partitions, for example using mkfs.ext3.
  3. Mount the disk partitions locally. This needs some further preparation:
    • Mounting an ext3-formatted partition requires that the ext3.ko kernel Module is loaded. This is not a standard entry in /lib/modules/ so it and its dependencies must be added from modules.tgz. Once present in /lib/modules/ they will be loaded automatically when required. The required kernel Module files and their paths in modules.tgz are:
      • kernel/fs/ext3/ext3.ko
      • kernel/fs/jbd/jbd.ko
      • kernel/fs/mbcache.ko
    • The directory on which the disk partition is to be mounted must be created, and must be re-created on reboot. A good way to achieve this is to load the local.lrp Package and then add an entry to /var/lib/lrpkg/local.local. Since only Files, not Directories, can be specified in this file, there needs to be a dummy file in the mount point directory. For example, add the following to /var/lib/lrpkg/local.local:
      export/media/.mountpoint
    • In order for the disk partition to be mounted automatically on reboot, an entry should be added to /etc/fstab, something like the following:
      /dev/sdb1    /export/media    ext3    defaults    1 1
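
Putting these steps together, a minimal command sketch follows. It assumes the data disk is /dev/sdb and the mount point is /export/media, matching the examples above; adjust the device and paths to your hardware.

  # Partition the disk (create /dev/sdb1 interactively) and format it
  # (the ext3.ko, jbd.ko and mbcache.ko Modules must already be in /lib/modules/, see above)
  fdisk /dev/sdb
  mkfs.ext3 /dev/sdb1
  # Create the mount point and the dummy file that makes local.lrp re-create it on reboot
  mkdir -p /export/media
  touch /export/media/.mountpoint
  echo "export/media/.mountpoint" >> /var/lib/lrpkg/local.local
  # Make the mount persistent across reboots, then mount it now
  echo "/dev/sdb1    /export/media    ext3    defaults    1 1" >> /etc/fstab
  mount /export/media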


Other aspects of the configuration procedure for a NAS server depend on which file sharing protocols will be used. With Bering-uClibc 4.x the supported protocols are:

  • CIFS, also known as SMB, typically used by Microsoft Windows client machines.
  • NFS, typically used by UNIX / Linux client machines.
  • FTP


CIFS/SMB

Packages for CIFS/SMB

Package samba.lrp (or samba22.lrp) is required for CIFS/SMB support.

The standard samba package is based on samba version 2.0.10a; it is small and sufficient in most cases. The samba22 package is based on samba version 2.2.12, which has more options but is also much bigger.
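
As an illustration, a minimal smb.conf for sharing the /export/media directory prepared above as a read-only guest share might look like the following. The option names are standard Samba 2.x options, but the share name and paths are examples and the configuration file location on Bering-uClibc may differ:

  [global]
     workgroup = WORKGROUP
     security = share
     guest account = nobody

  [media]
     path = /export/media
     guest ok = yes
     read only = yes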

Firewall Settings for CIFS/SMB

SMB uses TCP ports 137 and 139 and UDP ports 137 and 138. Direct-hosted ("NetBIOS-less") SMB traffic uses port 445 over both UDP and TCP (samba22 only).
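
Assuming the Shorewall firewall normally used with Bering-uClibc, with the client LAN in zone loc, rules along these lines in /etc/shorewall/rules would admit that traffic to the firewall box itself (the zone names are an assumption; adapt them to your layout):

  #ACTION   SOURCE   DEST   PROTO   DEST PORT(S)
  ACCEPT    loc      $FW    udp     137:138
  ACCEPT    loc      $FW    tcp     137,139
  # Direct-hosted SMB, samba22 only
  ACCEPT    loc      $FW    udp     445
  ACCEPT    loc      $FW    tcp     445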


NFS

Note: NFS Server support is currently under development and not included in Bering-uClibc 4.0-beta1. It is planned to be added for -beta2.

While support for NFSv4 is included, only the AUTH_UNIX authentication option is currently supported. RPCSEC_GSS, typically configured to use Kerberos (KRB5) authentication, is not supported.

Packages for NFS

NFS support is a little more complicated than the other protocols:

  • Most of the "work" is done by the Linux kernel.
  • The kernel code is accessed via a set of user-space helper programs. These are provided by the nfs-utils.lrp Package.
    • One of these helper programs, rpc.idmapd, relies on shared library /usr/lib/libnfsidmap.so from separate Package libnfsidmap.lrp, but rpc.idmapd is only required for NFS version 4.
  • NFS relies on the ONCRPC "port mapper" daemon which is provided by the separate Package portmap.lrp.
    • The port mapper is not contacted by clients that use NFS protocol version 4 only, but it is still required on the server.
  • Package libevent.lrp is also required by rpc.idmapd.

Modules for NFS

Kernel Module nfsd.ko (not to be confused with nfs.ko, which implements NFS client functionality) does most of the processing for an NFS server.

The nfsd.ko Module will be loaded automatically when required, provided that it and the Modules it depends on are present in directory /lib/modules/. These files need to be extracted from modules.tgz and copied to /lib/modules/ manually (one possible approach is sketched after this list). The required kernel Module files and their paths in modules.tgz are:

  • kernel/fs/exportfs/exportfs.ko
  • kernel/fs/lockd/lockd.ko
  • kernel/fs/nfsd/nfsd.ko
  • kernel/net/sunrpc/sunrpc.ko
  • kernel/net/sunrpc/auth_gss/auth_rpcgss.ko
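
One possible extraction sequence is sketched below. The location of modules.tgz is a placeholder, and the final layout under /lib/modules/ should match the arrangement of the .ko files already on your system:

  mkdir /tmp/mods && cd /tmp/mods
  tar -xzf /path/to/modules.tgz \
      kernel/fs/exportfs/exportfs.ko kernel/fs/lockd/lockd.ko \
      kernel/fs/nfsd/nfsd.ko kernel/net/sunrpc/sunrpc.ko \
      kernel/net/sunrpc/auth_gss/auth_rpcgss.ko
  # Copy the extracted Modules into place
  find . -name '*.ko' -exec cp {} /lib/modules/ \;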

Configuration for NFS

TODO
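
In the meantime, a minimal /etc/exports entry for the /export/media directory prepared above might look like this; the syntax is standard nfs-utils export syntax and the network address is illustrative, not yet verified on Bering-uClibc:

  # Share /export/media read-only with the 192.168.1.0/24 LAN
  /export/media    192.168.1.0/24(ro,sync,no_subtree_check)

After editing the file, exportfs -ra makes the running server pick up the changes.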

Firewall Settings for NFS

NFSv4 uses only TCP port 2049, for the NFS daemon.

NFSv3 uses:

  • UDP port 111 and TCP port 111 for the port mapper daemon.
  • UDP port 2049 and TCP port 2049 for the NFS daemon.
  • Dynamic port numbers for the mount daemon.
    • A fixed port number can be specified via the -p / --port command-line option to rpc.mountd; the firewall sketch below assumes this.
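
As with CIFS/SMB, assuming Shorewall with the client LAN in zone loc, NFSv3 rules might look like the following. Port 4002 for rpc.mountd is an arbitrary example and must match the -p option actually passed to the daemon:

  #ACTION   SOURCE   DEST   PROTO   DEST PORT(S)
  ACCEPT    loc      $FW    udp     111       # port mapper
  ACCEPT    loc      $FW    tcp     111
  ACCEPT    loc      $FW    udp     2049      # NFS daemon
  ACCEPT    loc      $FW    tcp     2049
  ACCEPT    loc      $FW    udp     4002      # rpc.mountd -p 4002
  ACCEPT    loc      $FW    tcp     4002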

Troubleshooting NFS

  • Directory /proc/fs/nfsd/ contains files which report on the NFS server configuration and status. For example, running command cat /proc/fs/nfsd/versions reports:
    +2 +3 -4 -4.1
  • The portmap.lrp Package includes the pmap_dump utility to report on the programs registered with the port mapper (see the example below).
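
For instance, to check that the NFS-related services have registered with the port mapper (filtering on the service names pmap_dump reports):

  pmap_dump | grep -E 'nfs|mountd'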


FTP

Packages for FTP

Package vsftpd.lrp implements a small and secure FTP daemon.
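
As an illustration, a minimal anonymous read-only vsftpd configuration for the /export/media directory might look like this. The option names are stock vsftpd options; the passive port range is an arbitrary example, chosen so that firewall rules can match it:

  listen=YES
  anonymous_enable=YES
  local_enable=NO
  write_enable=NO
  anon_root=/export/media
  # Pin passive-mode data connections to a known port range
  pasv_min_port=50000
  pasv_max_port=50100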

Firewall Settings for FTP

FTP uses TCP ports 20 and 21 in active mode; in passive mode only port 21 is used for control, plus a dynamically negotiated server-side data port (such as the pasv_min_port/pasv_max_port range shown above).


Configuring a SAN solution

This material copied directly from http://leaf.sourceforge.net/doc/bucu-nas.html - needs to be checked/updated for Bering-uClibc 4.x!
Davidmbrooke 21:01, 16 November 2010 (UTC)

Internet SCSI (iSCSI) is an official standard ratified on February 11, 2003 by the Internet Engineering Task Force that allows the use of the SCSI protocol over TCP/IP networks. iSCSI is a transport layer protocol in the SCSI-3 specifications framework. An iSCSI target is the server piece of an iSCSI SAN. The client piece/driver is called "initiator".

The iSCSI protocol uses TCP/IP for its data transfer. Unlike other network storage protocols, such as Fibre Channel (which is the foundation of most SANs), it requires only the simple and ubiquitous Ethernet interface (or any other TCP/IP-capable network) to operate. This enables low-cost centralization of storage without all of the usual expense and incompatibility normally associated with Fibre Channel storage area networks.

The SAN solution uses iSCSI extensively and recommends:

  • iscsid.lrp - an iSCSI target daemon.

The iSCSI target daemon can be used, together with an iSCSI initiator on a host, as a SAN solution. It supports block devices, regular files, LVM and RAID, and uses the following kernel modules, which are available in the kernel tarball: iscsi_trgt.o and fileio.o.

Configuration

An example iSCSI target configuration (more information in the ietd.conf file) where the block device /dev/hdc is used:

 ############################################################
 User joe secret
 # Targets definitions start with "Target" and the target name.
 # The target name must be a globally unique name, the iSCSI
 # standard defines the "iSCSI Qualified Name" as follows:
 #
 # iqn.yyyy-mm.<reversed domain name>[:identifier]
 #
 # "yyyy-mm" is the date at which the domain is valid and the identifier
 # is freely selectable. For further details please check the iSCSI spec.
 Target iqn.2006-08.network.private:storage.disk1
 # Users, who can access this target
 # (no users means anyone can access the target)
 User joe secret
 # Lun definition
 Lun 0 /dev/hdc fileio
 ############################################################

To use a regular file, it first has to be created on the target disk with dd (this example assumes you mounted the hard disk under /home):

dd if=/dev/zero of=/home/nas.img bs=4k count=<some very big number>

Where the resulting size is count*bs.
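
For example, to create a 1 GiB image file (262144 * 4 KiB = 1 GiB):

  dd if=/dev/zero of=/home/nas.img bs=4k count=262144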

The Lun definition would look like this:

Lun 0 /home/nas.img fileio

Firewall settings

If you run a firewall on the SAN box, you need to open TCP port 3260, as in the rule sketch below.
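
Again assuming Shorewall with the initiator in zone loc:

  #ACTION   SOURCE   DEST   PROTO   DEST PORT(S)
  ACCEPT    loc      $FW    tcp     3260      # iSCSI target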


Links

Some iSCSI clients (initiators):

  • Windows: http://www.microsoft.com/windowsserver2003/technologies/storage/iscsi/default.mspx
  • Linux: http://unh-iscsi.sourceforge.net, http://linux-iscsi.sourceforge.net, http://www.open-iscsi.org

Other iSCSI links:

  • http://cuddletech.com/articles/iscsi/index.html

Additional information:

  • Network Attached Storage on Wikipedia


