Bering-uClibc 4.x - User Guide - Advanced Topics - Setting Up a File Server

From bering-uClibc
Revision as of 09:06, 31 May 2012 by Davidmbrooke (Talk | contribs) (AFP: Extra notes for Netatalk (AFP))



This page provides a description of the Packages, Modules and configurations to use Bering-uClibc 4.x as a NAS (Network-Attached Storage) and/or SAN (Storage Area Network) solution. The main differences between these two approaches are:

  • With a NAS solution, hard drives are formatted on the Bering-uClibc machine and mounted by the local OS, and directories are then shared out to (typically) multiple other machines across the network using file sharing protocols like CIFS and NFS.
  • With a SAN solution, hard drives or partitions are not mounted by the local OS but shared out at a low level to (typically) a single other machine across the network using a block-level protocol like iSCSI.

To be ready for this special usage, Bering-uClibc versions 3.0 and above have DMA enabled by default when available. This will speed up hard disk performance considerably.


Assuming that hard drives are being used for data storage, the following basic packages are recommended to build a NAS or SAN solution:

The hdsupp.lrp Package currently contains the following programs: badblocks, e2fsck, e2label, fdisk, hdparm, syslinux, tune2fs, dosfsck, mke2fs, mkdosfs, mkfs.ext2, mkfs.ext3, mkfs.msdos, mkfs.vfat, mkswap, swapoff, swapon, fsck, fsck.ext2, fsck.ext3, fsck.msdos, fsck.vfat and losetup.

Note that the use of hard drive storage is not mandatory. It is practical to store a few MB of data in a RAM Disk, so if a LEAF machine is simply serving out small files for PXE booting other machines this can be done without additional disk hardware.

Configuring a NAS solution

For most NAS solutions (assuming hard drives are being used) the first step is to prepare the disk partitions and mount them on the local Bering-uClibc 4.x operating system. A typical procedure would be:

  1. Use fdisk to configure the disk partitions.
  2. Format the disk partitions, for example using mkfs.ext3.
  3. Mount the disk partitions locally. This needs some further preparation:
    • Mounting an ext3-formatted partition requires that the ext3.ko kernel Module is loaded. This is not a standard entry in /lib/modules/ so it and its dependencies must be added from modules.tgz. Once present in /lib/modules/ they will be loaded automatically when required. The required kernel Module files and their paths in modules.tgz are:
      • kernel/fs/ext3/ext3.ko
      • kernel/fs/jbd/jbd.ko
      • kernel/fs/mbcache.ko
    • The directory on which the disk partition is to be mounted must be created, and must be re-created on reboot. A good way to achieve this is to load the local.lrp Package and then add an entry to /var/lib/lrpkg/local.local. Since only Files, not Directories, can be specified in this file there needs to be a dummy file in the mount point directory. For example add the following to /var/lib/lrpkg/local.local:
      Note that this file must be visible whenever you run "Save configuration" from the LRP menu, and the file in the mountpoint directory will be hidden when the other disk is mounted over it. A workaround is to also create a file with exactly the same name in the root directory of the mounted disk.
    • In order for the disk partition to be automatically mounted on reboot an entry should be added to /etc/fstab. Something like the following:
      /dev/sdb1    /export/media    ext3    defaults    1 1
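As an illustration, the steps above might look like the following on the console. The device name /dev/sdb1, the mount point /export/media and the dummy file name .keep are assumptions for this example; partitioning with fdisk is interactive and therefore omitted.

```shell
# Sketch only -- device and paths are example values
mkfs.ext3 /dev/sdb1                    # format the new partition
mkdir -p /export/media                 # create the mount point directory
touch /export/media/.keep              # dummy file so the mount point survives "Save configuration"
echo "/export/media/.keep" >> /var/lib/lrpkg/local.local
echo "/dev/sdb1 /export/media ext3 defaults 1 1" >> /etc/fstab
mount /export/media                    # mount using the new fstab entry
touch /export/media/.keep              # re-create on the mounted disk so the name stays visible
```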

Other aspects of the configuration procedure for a NAS server depend on which file sharing protocols will be used. With Bering-uClibc 4.x the supported protocols are:

  • CIFS, also known as SMB, typically used by Microsoft Windows client machines.
  • NFS, typically used by UNIX / Linux client machines.
  • AFP, typically used by Apple Mac OS X client machines.
  • FTP.
  • TFTP.



The industry-standard CIFS/SMB server solution for UNIX operating systems is Samba.

In early Bering-uClibc 4.x releases samba.lrp contained version 2.0.10 of the Samba software and samba22.lrp was also offered as an alternative, containing version 2.2.12. These are both very old: version 2.2.12 was released 29 September 2004.

As of Bering-uClibc 4.2.1 the samba.lrp Package is upgraded to the latest upstream version (3.6.4). The alternative Package samba22.lrp is retained for anyone needing to run Samba 2.x but this will be retired for Bering-uClibc 5.x.

Packages for CIFS/SMB

Package samba.lrp is required to implement CIFS/SMB server support. The following Packages are pre-requisites for samba.lrp:

  • libpopt.lrp
  • libz.lrp
  • libtalloc.lrp
  • libtdb.lrp

Some optional Packages are also available:

  • samba-swat.lrp contains the Samba Web Administration Tool
    • In order to enable SWAT add the following line to /etc/inetd.conf and restart inetd:
      swat stream tcp nowait root /usr/sbin/swat swat
  • samba-util.lrp contains additional Samba server utilities
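Share definitions live in Samba's smb.conf configuration file. A minimal sketch might look like the following; the share name, path and user are assumptions for this example, and the parameters shown are standard smb.conf options:

```
[global]
   workgroup = WORKGROUP
   security = user

[media]
   comment = Shared media files
   path = /export/media
   valid users = joe
   read only = no
```

Refer to the smb.conf manual page shipped with Samba for the full list of parameters.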

Modules for CIFS/SMB

No specific Modules required.

Firewall Settings for CIFS/SMB

SMB uses TCP ports 137 and 139 and UDP ports 137 and 138. Direct-hosted "NetBIOS-less" SMB traffic uses TCP and UDP port 445.

SWAT uses TCP port 901.
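If Shorewall is used as the firewall, the ports above might be opened from the internal zone with rules like these. The zone names loc and $FW are the Shorewall defaults and are assumptions here; adjust them to your own zone layout:

```
# /etc/shorewall/rules (sketch)
ACCEPT   loc   $FW   udp   137,138
ACCEPT   loc   $FW   tcp   137,139,445
ACCEPT   loc   $FW   tcp   901          # SWAT, if enabled
```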

Further Reading for SAMBA


Note: While support for NFSv4 is included, only the AUTH_UNIX authentication option is currently supported. RPCSEC_GSS, typically configured to use Kerberos (KRB5) authentication, is not currently supported.

Packages for NFS

NFS support is a little more complicated than the other protocols:

  • Most of the "work" is done by the Linux kernel.
  • The kernel code is accessed via a set of user-space helper programs. These are provided by the nfs-utils.lrp Package.
    • One of these helper programs, rpc.idmapd, relies on a shared library in /usr/lib/ from the separate Package libnfsidmap.lrp, but rpc.idmapd is only required for NFS version 4.
  • NFS relies on the ONCRPC "port mapper" daemon which is provided by the separate Package portmap.lrp.
    • Clients do not use the port mapper when the server offers NFS protocol version 4 only, but it is still required on the server itself.
  • Package libevent.lrp is also required by rpc.idmapd.

Modules for NFS

Kernel Module nfsd.ko (not to be confused with nfs.ko which implements NFS client functionality) is what does most of the processing for an NFS server.

The nfsd.ko Module will be loaded automatically when required, provided that file and the files it depends on are present in directory /lib/modules/. These files need to be extracted from modules.tgz and copied to /lib/modules/ manually. The required kernel Module files and their paths in modules.tgz are:

  • kernel/fs/exportfs/exportfs.ko
  • kernel/fs/lockd/lockd.ko
  • kernel/fs/nfsd/nfsd.ko
  • kernel/net/sunrpc/sunrpc.ko
  • kernel/net/sunrpc/auth_gss/auth_rpcgss.ko

Configuration for NFS
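Exported directories are listed in /etc/exports in the usual nfs-utils format. A minimal sketch, where the directory and client subnet are assumptions for this example:

```
# /etc/exports (sketch)
/export/media   192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing the file, run exportfs -ra to make the changes active.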


Firewall Settings for NFS

NFSv4 uses only TCP port 2049, for the NFS daemon.

NFSv3 uses:

  • UDP port 111 and TCP port 111 for the port mapper daemon.
  • UDP port 2049 and TCP port 2049 for the NFS daemon.
  • Dynamic port numbers for the mount daemon.
    • A fixed port number can be specified via the -p / --port command-line option to rpc.mountd.
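For Shorewall, the NFSv3 port list above translates into rules like the following, assuming rpc.mountd has been pinned to port 4002 with its -p option. The port number and zone names are examples:

```
# /etc/shorewall/rules (sketch)
ACCEPT   loc   $FW   udp   111,2049,4002
ACCEPT   loc   $FW   tcp   111,2049,4002
```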

Note too that portmap and the nfs-utils daemons are controlled by TCP Wrappers, in other words by the files /etc/hosts.allow and /etc/hosts.deny.
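A hypothetical /etc/hosts.allow fragment permitting NFS clients on the internal subnet might read as follows; the daemon names and subnet are assumptions for this example, with everything else denied in /etc/hosts.deny:

```
# /etc/hosts.allow (sketch)
portmap: 192.168.1.
mountd:  192.168.1.
ALL: 127.0.0.1
```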

Troubleshooting NFS

  • Directory /proc/fs/nfsd/ contains files which report on the NFS server configuration and status. For example, running command cat /proc/fs/nfsd/versions reports:
    +2 +3 -4 -4.1
  • The portmap.lrp Package includes the pmap_dump utility to report on the programs registered with the port mapper.


Packages for AFP

Package netatalk.lrp implements the standard Open Source AFP daemon. Only AFP over TCP is supported, not AFP over the AppleTalk network protocol.

There are many pre-requisite Packages for netatalk.lrp but since AFP support is introduced at the same time as Package Dependency Auto-Loading all of the dependencies should be loaded automatically, as long as the relevant .lrp files can be located in "PKGPATH".

The netatalk Package is compiled with support for Zeroconf and will advertise the configured AFP shares if avahid.lrp is also installed. While libavahi.lrp is specified as a Package Dependency, the Avahi daemon itself is not mandatory and must be specified separately if required. If it is not installed there will be some warnings about failure to contact the Avahi daemon, but these can be ignored if Zeroconf functionality is not required.

Modules for AFP

No specific kernel Modules required.

Configuration for AFP

TCP Wrappers

Note that the netatalk Package is built with TCP Wrappers support. By default, the contents of /etc/hosts.allow specify that clients on the default internal network can connect, but error messages (in /var/log/daemon.log) like the following normally imply that TCP Wrappers is blocking access:

May 22 20:27:28 myserver afpd[5122]: refused connect from ::ffff:
May 22 20:27:28 myserver afpd[5122]: dsi_getsess: Connection refused
May 22 20:27:28 myserver afpd[5122]: dsi_start: session error: Connection refused
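If such messages appear, a hypothetical /etc/hosts.allow entry like the following would admit clients from the internal subnet. The daemon name afpd matches the log lines above; the subnet is an example:

```
# /etc/hosts.allow (sketch)
afpd: 192.168.1.
```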
User Authentication Configuration

The netatalk Package authenticates users against the standard /etc/passwd and /etc/shadow files on the Bering-uClibc machine.

In principle netatalk can be configured to use other sources of user authentication (such as an LDAP repository) but this has not been enabled for the initial release. Please raise an Enhancement Request using the LEAF Trac system if you would like this functionality to be added.

File Share Configuration

The primary configuration file for the netatalk Package is /etc/netatalk/AppleVolumes.default. The version included in netatalk.lrp is just the standard one shipped as part of the netatalk distribution. Refer to the comments in that file or to the standard netatalk documentation for more details. Note the '~' symbol on a line by itself near the end of the file; this enables the sharing of users' home directories by default.
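As an illustration, a hypothetical additional entry in /etc/netatalk/AppleVolumes.default sharing a mounted data partition could read as follows; the path, volume name and user are assumptions for this example:

```
# path  "volume name"  options (sketch)
/export/media "Media" allow:joe
```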


Packages for FTP

Package vsftpd.lrp implements a small and secure FTP daemon.
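vsftpd is configured via the vsftpd.conf file. A minimal sketch allowing local users to log in and upload, using standard vsftpd option names; whether these settings suit your security requirements is for you to decide:

```
# vsftpd.conf (sketch)
anonymous_enable=NO
local_enable=YES
write_enable=YES
```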

Firewall Settings for FTP

FTP uses TCP port 21 for the control connection and TCP port 20 for active-mode data transfers. In passive mode only port 21 plus a dynamically-negotiated high port are used, so the nf_conntrack_ftp connection tracking helper is normally needed to handle the data connection.


The Trivial File Transfer Protocol, TFTP, is not a mainstream NAS protocol, but a TFTP server is nevertheless a file server, so this is the best place to describe how to configure Bering-uClibc 4.x as a TFTP server.

TFTP is most commonly used to support PXE network booting, where TFTP is the only protocol supported. Other uses for TFTP include downloading configuration and firmware update files to "embedded" devices such as VOIP phones and wireless access points.

Packages for TFTP

The dnsmasq.lrp Package has its "internal" TFTP server option switched off at compile time for Bering-uClibc 4.x so a separate TFTP server program is required.

The tftpd.lrp Package implements the industry-standard "tftp-hpa" TFTP server by H. Peter Anvin of SYSLINUX fame.

TFTP Server Configuration

The tftpd.lrp Package has no configuration files and hence no entry in the LEAF configuration menu structure. In fact, the Package only contains two items:

  • The /tftpboot/ directory.
  • The /usr/sbin/in.tftpd executable.

In order to enable the operation of the TFTP server file /etc/inetd.conf must be edited (this is entry 5 "superserver daemon configuration" in the Network configuration menu). The configuration line for "tftp" is already present but is commented out. Remove the leading '#' so that the line reads as follows:

tftp           dgram   udp     wait    root    /usr/sbin/in.tftpd      in.tftpd -s /tftpboot

Then restart the INET daemon:

svi inetd restart

Note: The "-s /tftpboot" means that the TFTP server regards directory /tftpboot/ as the "root" of its directory structure. Any references to filenames from TFTP clients need to omit this part of the filename. In other words, when a client asks for file gpxelinux.0 (or /gpxelinux.0) it is actually served the file /tftpboot/gpxelinux.0.

Of course, you will also want to add some files to the /tftpboot/ directory. If you have a lot of files to store / serve you will probably want to mount a hard drive partition on this directory, as described elsewhere on this page. On the other hand, if you only have a few small files you can simply place them in this directory on the RAM Disk. Note that such files will NOT be backed up automatically when the Bering-uClibc 4.x configuration is saved. The solution is to install the local.lrp Package and to list files under the /tftpboot/ directory in configuration file /var/lib/lrpkg/local.local. For example:
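A hypothetical listing, assuming PXE boot files have been placed under /tftpboot/ (the file names are examples only):

```
# /var/lib/lrpkg/local.local (sketch)
/tftpboot/pxelinux.0
/tftpboot/pxelinux.cfg/default
```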


Firewall Settings for TFTP

TFTP uses UDP port 69.

There are two NetFilter "helper" Modules which support the handling of TFTP traffic: nf_conntrack_tftp.ko and nf_nat_tftp.ko. The latter is only necessary when NATing TFTP traffic. Both modules are specified in /usr/share/shorewall/modules and hence loaded by default when Shorewall is started.

Note: With Bering-uClibc 4.0 the entries in /usr/share/shorewall/modules are not loaded automatically. Either specify the required Module(s) in /etc/modules or refer to the Shorewall page for details of how to enable the processing of the Shorewall modules file.

Note that the TFTP server is controlled by the TCP Wrappers code so /etc/hosts.allow and /etc/hosts.deny must also be configured to permit access.

Configuring a SAN solution

This material copied directly from - needs to be checked/updated for Bering-uClibc 4.x!
Davidmbrooke 21:01, 16 November 2010 (UTC)

Internet SCSI (iSCSI) is an official standard ratified on February 11, 2003 by the Internet Engineering Task Force that allows the use of the SCSI protocol over TCP/IP networks. iSCSI is a transport layer protocol in the SCSI-3 specifications framework. An iSCSI target is the server piece of an iSCSI SAN. The client piece/driver is called "initiator".

The iSCSI protocol uses TCP/IP for its data transfer. Unlike other network storage protocols, such as Fibre Channel (which is the foundation of most SANs), it requires only the simple and ubiquitous Ethernet interface (or any other TCP/IP-capable network) to operate. This enables low-cost centralization of storage without all of the usual expense and incompatibility normally associated with Fibre Channel storage area networks.

The SAN solution uses iSCSI extensively and recommends:

iscsid.lrp - an iSCSI target daemon.

The iscsi target daemon can be used, together with an iscsi initiator on a host, as a SAN solution. The iscsi target daemon supports block devices, regular files, LVM and RAID. It uses the following kernel modules which are available in the kernel tarball: iscsi_trgt.o and fileio.o.


An example iscsi target configuration (more information in the ietd.conf file) where the block device /dev/hdc is used:

 # Target definitions start with "Target" and the target name.
 # The target name must be a globally unique name; the iSCSI
 # standard defines the "iSCSI Qualified Name" as follows:
 # iqn.yyyy-mm.<reversed domain name>[:identifier]
 # "yyyy-mm" is the date at which the domain is valid and the identifier
 # is freely selectable. For further details please check the iSCSI spec.
 # Example target name:
 Target iqn.2012-05.com.example:storage.disk1
 # Users who can access this target
 # (no users means anyone can access the target)
 User joe secret
 # Lun definition
 Lun 0 /dev/hdc fileio

To use a regular file, it first has to be created on the target disk with dd (this example assumes you mounted the harddisk under /home):

dd if=/dev/zero of=/home/nas.img bs=4k count=<some very big number>

Where the resulting size is count*bs.

The Lun definition would look like this:

Lun 0 /home/nas.img fileio

Firewall settings

If you run a firewall on the SAN box, you need to open tcp port 3260.



Additional information:

Network Attached Storage on wikipedia
