
Installing and Maintaining XRootD

XRootD is a hierarchical storage system that can be used in a variety of ways to access data, typically distributed among actual storage resources. One way to use XRootD is to have it refer to many data resources at a single site, and another way to use it is to refer to many storage systems, most likely distributed among sites. An XRootD system includes a redirector, which accepts requests for data and finds a storage repository — locally or otherwise — that can provide the data to the requestor.

Use this page to learn how to install, configure, and use an XRootD redirector as part of a Storage Element (SE) or as part of a global namespace.

Before Starting

Before starting the installation process, consider the following points:

  • User IDs: If it does not exist already, the installation will create the Linux user ID xrootd
  • Service certificate: The XRootD service uses a host certificate at /etc/grid-security/host*.pem
  • Networking: The XRootD service uses port 1094 by default

As with all OSG software installations, there are some one-time (per host) steps to prepare in advance.

Installing an XRootD Server

An installation of the XRootD server consists of the server itself and its dependencies. Install these with Yum:

root@host # yum install xrootd

Configuring an XRootD Server

Minimal configuration

A new installation of XRootD is already configured to run a standalone server that serves files from /tmp on the local file system. This configuration is useful to verify basic connectivity between your clients and your server. To do this, start the xrootd service with standalone config as described in the managing services section.

You should now be able to copy a file such as /bin/sh into /tmp using the xrdcp command. To test:

root@host # yum install xrootd-client
root@host # xrdcp /bin/sh root://localhost:1094//tmp/first_test
[xrootd] Total 0.76 MB  [====================] 100.00 % [inf MB/s]
root@host # ls -l /tmp/first_test
-rw-r--r-- 1 xrootd xrootd 801512 Apr 11 10:48 /tmp/first_test

Other than for testing, a standalone server is useful when you want to serve files from a single host with many large disks. If your storage capacity is spread out over multiple hosts, you will need to set up an XRootD cluster.
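To serve a directory other than /tmp in standalone mode, adjust the export line in /etc/xrootd/xrootd-standalone.cfg. A minimal sketch, assuming you want to serve /data instead:

all.export /data

Restart the xrootd service after changing the configuration.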

Advanced configuration

An advanced XRootD setup has multiple components; it is important to validate that each additional component that you set up is working before moving on to the next component. We have included validation instructions after each component below.

Creating an XRootD cluster

If your storage is spread out over multiple hosts, you will need to set up an XRootD cluster. The cluster uses one "redirector" node as a frontend for user accesses, and multiple data nodes that have the data that users request. Two daemons will run on each node:

xrootd
The eXtended Root Daemon controls file access and storage.

cmsd
The Cluster Management Services Daemon controls communication between nodes.

Note that for large virtual organizations, a site-level redirector may also communicate upward with a regional or global redirector that handles access to a multi-level hierarchy. This section only covers a single level of XRootD hierarchy.
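For reference, a site redirector subscribes to a higher-level redirector with the all.manager meta directive. A minimal sketch, assuming a hypothetical global redirector named GLOBAL-RDR listening on the conventional cmsd port 1213:

all.manager meta GLOBAL-RDR:1213

This line would appear on the site redirector in addition to its all.role manager line.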

In the instructions below, RDRNODE will refer to the redirector host and DATANODE will refer to the data node host. These should be replaced with the fully-qualified domain name of the host in question.

Modify /etc/xrootd/xrootd-clustered.cfg

You will need to modify the xrootd-clustered.cfg on the redirector node and each data node. The following example should serve as a base configuration for clustering. Further customizations are detailed below.

all.export /tmp stage
set xrdr = RDRNODE
all.manager $(xrdr):3121

if $(xrdr)
  # Lines in this block are only executed on the redirector node
  all.role manager
else
  # Lines in this block are executed on all nodes but the redirector node
  all.role server
  cms.space min 2g 5g
fi

You will need to customize the following lines:

  • all.export /tmp stage: Change /tmp to the directory that XRootD should export.
  • set xrdr = RDRNODE: Change RDRNODE to the hostname of the redirector.
  • cms.space min 2g 5g: Reserves free space on the node. In this example, if free space falls below 2GB, xrootd will not store further files on this node until free space climbs above 5GB. You can use k, m, g, or t to indicate kilobytes, megabytes, gigabytes, or terabytes, respectively.

Further information can be found at http://xrootd.slac.stanford.edu/doc

Verifying the clustered config

Start both xrootd and cmsd on all nodes according to the instructions in the managing services section.

Verify that you can copy a file such as /bin/sh to /tmp on the data server via the redirector:

root@host # xrdcp /bin/sh root://RDRNODE:1094//tmp/second_test
[xrootd] Total 0.76 MB  [====================] 100.00 % [inf MB/s]

Check that /tmp/second_test is located on the data server DATANODE.
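For example, on the data node (the output should resemble that of the standalone test above):

root@DATANODE # ls -l /tmp/second_test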

(Optional) Adding Simple Server Inventory to your cluster

The Simple Server Inventory (SSI) provides an inventory for each data server. SSI requires:

  • A second instance of the xrootd daemon on the redirector
  • A "composite name space daemon" (XrdCnsd) on each data server; this daemon handles the inventory

As an example, we will set up a two-node XRootD cluster with SSI.

Host A is a redirector node that is running the following daemons:

  1. xrootd redirector
  2. cmsd
  3. xrootd - a second instance, required for SSI

Host B is a data server that is running the following daemons:

  1. xrootd data server
  2. cmsd
  3. XrdCnsd - started automatically by xrootd

We will need to create a directory on the redirector node for Inventory files.

root@hostA # mkdir -p /data/inventory
root@hostA # chown xrootd:xrootd /data/inventory

On the data server (host B), we will use a storage cache at a location different from /tmp:

root@hostB # mkdir -p /local/xrootd
root@hostB # chown xrootd:xrootd /local/xrootd

We will be running two instances of XRootD on hostA. Modify /etc/xrootd/xrootd-clustered.cfg to give the two instances different behavior, as follows:

all.export /data/xrootdfs
set xrdr=hostA
all.manager $(xrdr):3121
if $(xrdr) && named cns
      all.export /data/inventory
      xrd.port 1095
else if $(xrdr)
      all.role manager
      xrd.port 1094
else
      all.role server
      oss.localroot /local/xrootd
      ofs.notify closew create mkdir mv rm rmdir trunc | /usr/bin/XrdCnsd -d -D 2 -i 90 -b $(xrdr):1095:/data/inventory
      # add cms.space if you have less than 11GB
      # cms.space options http://xrootd.slac.stanford.edu/doc/dev/cms_config.htm
      cms.space min 2g 5g
fi

The value of oss.localroot will be prepended to any file access.
E.g. accessing root://RDRNODE:1094//data/xrootdfs/test1 will actually go to /local/xrootd/data/xrootdfs/test1.

Starting a second instance of XRootD on EL 6

The procedure for starting a second instance differs between EL 6 and EL 7. This section is the procedure for EL 6.

Now, we have to change /etc/sysconfig/xrootd on the redirector node (hostA) to run multiple instances of XRootD. The second instance of XRootD will be named "cns" and will be used for SSI.

XROOTD_USER=xrootd 
XROOTD_GROUP=xrootd 
XROOTD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg"
XROOTD_CNS_OPTIONS="-k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg" 
CMSD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-clustered.cfg" 
FRMD_DEFAULT_OPTIONS="-k 7 -l /var/log/xrootd/frmd.log -c /etc/xrootd/xrootd-clustered.cfg" 
XROOTD_INSTANCES="default cns" 
CMSD_INSTANCES="default" 
FRMD_INSTANCES="default" 

Now we can start the XRootD cluster by executing the following commands. On the redirector you will see:

root@hostA # service xrootd start
Starting xrootd (xrootd, default): [ OK ]
Starting xrootd (xrootd, cns): [ OK ]
root@hostA # service cmsd start
Starting xrootd (cmsd, default): [ OK ]

On the redirector node you should see two instances of xrootd running:

root@hostA # ps auxww|grep xrootd
xrootd 29036 0.0 0.0 44008 3172 ? Sl Apr11 0:00 /usr/bin/xrootd -k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg -b -s /var/run/xrootd/xrootd-default.pid -n default
xrootd 29108 0.0 0.0 43868 3016 ? Sl Apr11 0:00 /usr/bin/xrootd -k 7 -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg -b -s /var/run/xrootd/xrootd-cns.pid -n cns
xrootd 29196 0.0 0.0 51420 3692 ? Sl Apr11 0:00 /usr/bin/cmsd -k 7 -l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-clustered.cfg -b -s /var/run/xrootd/cmsd-default.pid -n default

Warning: the log file for the second named instance of xrootd will be placed in /var/log/xrootd/cns/xrootd.log

On the data server node you should see that the XrdCnsd process has been started:

root@hostB # ps auxww|grep xrootd
xrootd 19156 0.0 0.0 48096 3256 ? Sl 07:37 0:00 /usr/bin/cmsd -l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-clustered.cfg -b -s /var/run/xrootd/cmsd-default.pid -n default
xrootd 19880 0.0 0.0 46124 2916 ? Sl 08:33 0:00 /usr/bin/xrootd -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg -b -s /var/run/xrootd/xrootd-default.pid -n default
xrootd 19894 0.0 0.1 71164 6960 ? Sl 08:33 0:00 /usr/bin/XrdCnsd -d -D 2 -i 90 -b fermicloud053.fnal.gov:1095:/data/inventory

Starting a second instance of XRootD on EL 7

The procedure for starting a second instance differs between EL 6 and EL 7. This section is the procedure for EL 7.

  1. Create a symlink at /etc/xrootd/xrootd-cns.cfg pointing to /etc/xrootd/xrootd-clustered.cfg:
root@hostA # ln -s /etc/xrootd/xrootd-clustered.cfg /etc/xrootd/xrootd-cns.cfg
  2. Start an instance of the xrootd service named cns using the syntax in the managing services section, as shown below.
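Following the EL 7 xrootd@<config> service pattern from the managing services section, the command would be:

root@hostA # systemctl start xrootd@cns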

Testing an XRootD cluster with SSI

  1. Copy a file to the redirector node, specifying the storage path (/data/xrootdfs instead of /tmp):
root@hostA # xrdcp /bin/sh root://RDRNODE:1094//data/xrootdfs/test1
[xrootd] Total 0.00 MB [================] 100.00 % [inf MB/s]
  2. To verify that SSI is working, execute the cns_ssi command on the redirector node:
root@hostA # cns_ssi list /data/inventory
fermicloud054.fnal.gov incomplete inventory as of Mon Apr 11 17:28:11 2011
root@hostA # cns_ssi updt /data/inventory
cns_ssi: fermicloud054.fnal.gov inventory with 1 directory and 1 file updated with 0 errors.
root@hostA # cns_ssi list /data/inventory
fermicloud054.fnal.gov complete inventory as of Tue Apr 12 07:38:29 2011 /data/xrootdfs/test1

Note: In this example, fermicloud053.fnal.gov is the redirector node and fermicloud054.fnal.gov is a data node.

(Optional) Enabling a FUSE mount

XRootD storage can be mounted as a standard POSIX filesystem via FUSE, providing users with a more familiar interface.

Modify /etc/fstab by adding the following entries:

xrootdfs                /mnt/xrootd              fuse    rdr=xroot://redirector1.domain.com:1094//path/,uid=xrootd 0 0

Replace /mnt/xrootd with the path that you would like to use as the mount point. This should also match the GridFTP XROOTD_VMP local path setting. Create the /mnt/xrootd directory and make sure the xrootd user exists on the system. Once you are finished, you can mount it:

root@host # mount /mnt/xrootd

You should now be able to run UNIX commands such as ls /mnt/xrootd to see the contents of the XRootD server.

(Optional) Authorization

There are several authorization options in XRootD, available through security plugins. In this document, we cover three: an authorization file with simple (Unix) security, shared keys, and xrootd-lcmaps authorization.

Note: On the data nodes, files will, under most circumstances, be owned by the unix user xrootd (or another daemon user), not by the user you authenticated as. XRootD verifies permissions and authorization based on the identity that the security plugin authenticates you as (for instance, your unix user for option 1 or your GUMS identity for option 2), but internally the files on the data nodes are owned by the xrootd user.

Authorization file

In order to add security to your cluster, you will need an authorization file on your data server node. Create /etc/xrootd/auth_file:

# This means that all the users have read access to the datasets 
u * /data/xrootdfs lr

# This means that all the users have full access to their private dirs 
u = /data/xrootdfs/@=/ a

# This means that this privileged user can do everything
# You need at least one user like that, in order to create the
# private dir for each user who wants to store data in the facility
u xrootd /data/xrootdfs a 

Here we assume that your storage path is /data/xrootdfs (same as in the previous example).

Change the file ownership (if you created the file as root):

root@hostB # chown xrootd:xrootd /etc/xrootd/auth_file

This file is a flat file of the following form:

idtype id path privs

Descriptions of each field follow. For more details, or for examples of how to use templated user options, see XRootd Authorization Database File.

  • idtype: Type of id. Use u for username, g for group, etc.
  • id: Username (or group name). Use * for all users or = for user-specific capabilities, like home directories.
  • path: The path prefix to be used for matching purposes.
  • privs: Letter list of privileges: a - all; l - lookup; d - delete; n - rename; i - insert; r - read; k - lock (not used); w - write.

Security option 1: adding simple (Unix) security

The first step in adding simple Unix security, which validates users based on username, is to create the auth_file as described in the previous section.

The next step is to modify /etc/xrootd/xrootd-clustered.cfg on both nodes:

all.export /data/xrootdfs 
set xrdr=hostA 
all.manager $(xrdr):3121 
if $(xrdr) && named cns 
    all.export /data/inventory 
    xrd.port 1095 
else if $(xrdr) 
    all.role manager 
    xrd.port 1094 
else 
    all.role server 
    oss.localroot /local/xrootd 
    ofs.notify closew create mkdir mv rm rmdir trunc | /usr/bin/XrdCnsd -d -D 2 -i 90 -b $(xrdr):1095:/data/inventory 
    cms.space min 2g 5g 

     # ENABLE_SECURITY_BEGIN 
    xrootd.seclib /usr/lib64/libXrdSec.so 
    # this specifies that we use the 'unix' authentication module; additional ones can be specified
    sec.protocol /usr/lib64 unix 
    # this is the authorization file 
    acc.authdb /etc/xrootd/auth_file 
    ofs.authorize 
    # ENABLE_SECURITY_END  
fi 

Note that, to give users access to their directories, you will have to create the directories under oss.localroot (for instance, /local/xrootd/data/xrootdfs/username) and make sure they are writable by the xrootd user (or the daemon user, if you have changed it). Files under localroot on the data nodes are normally owned by xrootd, not by the authenticated username.
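For example, on each data server (username here is a placeholder for an actual user):

root@hostB # mkdir -p /local/xrootd/data/xrootdfs/username
root@hostB # chown xrootd:xrootd /local/xrootd/data/xrootdfs/username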

After making all the changes, restart the xrootd and cmsd daemons on all nodes.

Testing an XRootD cluster with simple security enabled

  1. Log in to the redirector node as root.
  2. Check that user "root" can still read files:
root@RDRNODE # xrdcp root://RDRNODE:1094//data/xrootdfs/test1 /tmp/b
[xrootd] Total 0.00 MB [================] 100.00 % [inf MB/s]
  3. Check that user "root" cannot write files under /data/xrootdfs:
root@RDRNODE # xrdcp /tmp/b root://RDRNODE:1094//data/xrootdfs/test2
Last server error 3010 ('Unable to create /data/xrootdfs/test2; Permission denied')
Error accessing path/file for root://localhost:1094//data/xrootdfs/test2

or you may get this error:

root@RDRNODE # xrdcp /tmp/b root://RDRNODE:1094//data/xrootdfs/test2
Last server error 3011 ('No servers are available to write the file.')
Error accessing path/file for root://localhost:1094//data/xrootdfs/test2
  4. Check that a regular user can copy/retrieve files to/from /data/xrootdfs/~/...:
root@RDRNODE # su - user
-bash-3.2$ xrdcp /tmp/a root://RDRNODE:1094//data/xrootdfs/user/test1
[xrootd] Total 0.00 MB [================] 100.00 % [inf MB/s]
-bash-3.2$ xrdcp root://localhost:1094//data/xrootdfs/user/test1 /tmp/c
[xrootd] Total 0.00 MB [================] 100.00 % [inf MB/s]

Security option 2: Shared keys

If you want to enable security for access to XRootD via xrootdfs, you will need to modify the XRootD configuration and perform several steps to secure xrootdfs.

  1. On the XRootD redirector node, execute the following command:

    root@hostA # xrdsssadmin -k <my_key_name> -u anybody -g usrgroup add <keyfile>
    

    e.g.:

    root@hostA # xrdsssadmin -k top_secret -u anybody -g usrgroup add /etc/xrootd/xrootd.key
    
  2. Set ownership of the key file:

    root@hostA # chown xrootd:xrootd /etc/xrootd/xrootd.key
    
  3. On the node where xrootdfs is installed, modify /etc/fstab to add the security information:

    xrootdfs /mnt/xrootd  fuse rdr=xroot://redirector1.domain.com:1094//path/redirector1,uid=xrootd,sss=keyfile 0 0
    
  4. On all XRootD data servers and redirector nodes, modify XRootD configuration (/etc/xrootd/xrootd-clustered.cfg) by adding the following segment:

    # ENABLE_SECURITY_BEGIN
       xrootd.seclib /usr/lib64/libXrdSec.so
       # the line below must come before "sec.protocol ... unix"
       sec.protocol /usr/lib64 sss -s keyfile
       # this specifies that we also use the 'unix' authentication module; additional ones can be specified
       sec.protocol /usr/lib64 unix
       # this is the authorization file
       acc.authdb /etc/xrootd/auth_file
       ofs.authorize
       # ENABLE_SECURITY_END
    
  5. On all XRootD data server nodes, edit /etc/xrootd/auth_file to add authorized users in the form u username /directoryname lr, where "lr" is the permission set. An example follows.
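    A hypothetical entry granting the user tlevshin read access to the storage path:

    u tlevshin /data/xrootdfs lr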

  6. Copy the keyfile from the redirector node to every data server node and to the xrootdfs node, and make sure that the file is owned by the xrootd user, as sketched below.
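    For example, a sketch using scp (DATANODE is a placeholder for each data server or the xrootdfs node):

    root@hostA # scp -p /etc/xrootd/xrootd.key DATANODE:/etc/xrootd/xrootd.key
    root@DATANODE # chown xrootd:xrootd /etc/xrootd/xrootd.key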

  7. Restart the XRootD cluster by restarting all the relevant daemons.

  8. On the xrootdfs node, execute the mount:

    root@host # mount /mnt/xrootd
    
  9. Verify that you can access the mount point (df, ls) and cannot write into an unauthorized path, e.g.:

    root@host # cp /bin/sh /mnt/xrootd/tlevshin/test1
    cp: cannot create regular file `/mnt/xrootd/tlevshin/test1': Permission denied
    

    Log in as yourself and try:

    root@host # su - tlevshin
    tlevshin@host $ cp /bin/sh /mnt/xrootd/tlevshin/test1
    

Security option 3: xrootd-lcmaps authorization

The xrootd-lcmaps security plugin uses the lcmaps library and the LCMAPS VOMS plugin to authenticate and authorize users based on X509 certificates and VOMS attributes. Perform the following instructions on all data nodes:

  1. Install CA certificates and manage CRLs

  2. Follow the instructions for requesting a service certificate, using xrootd for both the <SERVICE> and <OWNER>, resulting in a certificate and key in /etc/grid-security/xrootd/xrootdcert.pem and /etc/grid-security/xrootd/xrootdkey.pem, respectively.

  3. Install and configure the LCMAPS VOMS plugin

  4. Install xrootd-lcmaps and necessary configuration:

    root@host # yum install xrootd-lcmaps vo-client
    
  5. Append the following to /etc/lcmaps.db:

    xrootd_policy:
    verifyproxynokey -> banfile
    banfile -> banvomsfile | bad
    banvomsfile -> gridmapfile | bad
    gridmapfile -> good | vomsmapfile
    vomsmapfile -> good | defaultmapfile
    defaultmapfile -> good | bad
    
  6. Modify /etc/osg/config.d/10-misc.ini so that future invocations of osg-configure don't overwrite your /etc/lcmaps.db changes:

    edit_lcmaps_db = False
    
  7. Configure access rights for mapped users by creating and modifying the XRootD authorization file

  8. Modify your XRootD configuration:

    1. Choose the configuration file to edit based on the following table:

      If you are running XRootD in...   Then modify the following file...
      Standalone mode                   /etc/xrootd/xrootd-standalone.cfg
      Clustered mode                    /etc/xrootd/xrootd-clustered.cfg
    2. Add the following lines to the configuration that you chose above:

      xrootd.seclib /usr/lib64/libXrdSec-4.so
      sec.protocol /usr/lib64 gsi -certdir:/etc/grid-security/certificates \
                          -cert:/etc/grid-security/xrootd/xrootdcert.pem \
                          -key:/etc/grid-security/xrootd/xrootdkey.pem -crl:1 \
                          -authzfun:libXrdLcmaps.so -authzfunparms:--loglevel,0 \
                          -gmapopt:10 -gmapto:0
      acc.authdb /etc/xrootd/auth_file
      ofs.authorize
      

      If you are running XRootD in clustered mode, the above will also need to be added on all data nodes, in the section of the configuration relevant to the data server role. For instance, in the following example, the security configuration is placed after the all.role server line:

      all.export /data/xrootdfs
      set xrdr=hostA
      all.manager $(xrdr):3121
      if $(xrdr) && named cns
          all.export /data/inventory
          xrd.port 1095
      else if $(xrdr)
          all.role manager
          xrd.port 1094
      else
          all.role server
          oss.localroot /local/xrootd
          ofs.notify closew create mkdir mv rm rmdir trunc | /usr/bin/XrdCnsd -d -D 2 -i 90 -b $(xrdr):1095:/data/inventory
          cms.space min 2g 5g
      
           # ENABLE_SECURITY_BEGIN
          xrootd.seclib /usr/lib64/libXrdSec-4.so
          sec.protocol /usr/lib64 gsi -certdir:/etc/grid-security/certificates \
                              -cert:/etc/grid-security/xrootd/xrootdcert.pem \
                              -key:/etc/grid-security/xrootd/xrootdkey.pem -crl:1 \
                              -authzfun:libXrdLcmaps.so -authzfunparms:--loglevel,0 \
                              -gmapopt:10 -gmapto:0
          acc.authdb /etc/xrootd/auth_file
          ofs.authorize
          # ENABLE_SECURITY_END 
      fi
      
  9. Restart the relevant services

To verify the LCMAPS security, run the following commands from a machine with your user certificate/key pair, xrootd-client, and voms-clients-cpp installed:

  1. Destroy any pre-existing proxies and attempt a copy to <DESTINATION PATH> on the <XROOTD HOST> to verify failure:

    user@client $ voms-proxy-destroy
    user@client $ xrdcp /bin/bash root://<XROOTD HOST>/<DESTINATION PATH>
    180213 13:56:49 396570 cryptossl_X509CreateProxy: EEC certificate has expired
    [0B/0B][100%][==================================================][0B/s]
    Run: [FATAL] Auth failed
    
  2. On the XRootD host, add your DN to /etc/grid-security/grid-mapfile
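    A hypothetical grid-mapfile entry (the DN and username are placeholders):

    "/DC=org/DC=example/OU=People/CN=Your Name" yourusername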

  3. Generate your proxy and verify that you can successfully transfer files:

    user@client $ voms-proxy-init
    user@client $ xrdcp /bin/sh root://<XROOTD HOST>/<DESTINATION PATH>
    [938.1kB/938.1kB][100%][==================================================][938.1kB/s]
    

    If your transfer does not succeed, run the previous command with --debug 2 for more information.
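    For example:

    user@client $ xrdcp --debug 2 /bin/sh root://<XROOTD HOST>/<DESTINATION PATH>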

(Optional) Adding CMS TFC support to XRootD (CMS sites only)

For CMS users, there is a package available to integrate rule-based name lookup using a storage.xml file. If you are not setting up a CMS site, you can skip this section.

root@host # yum install --enablerepo=osg-contrib xrootd-cmstfc

You will need to add your storage.xml to /etc/xrootd/storage.xml and then add the following line to your XRootD configuration:

# Integrate with CMS TFC, placed in /etc/xrootd/storage.xml
oss.namelib /usr/lib64/libXrdCmsTfc.so file:/etc/xrootd/storage.xml?protocol=hadoop

Append the ?protocol=hadoop suffix only if you are running Hadoop (see below).

See the CMS TWiki for more information.

(Optional) Adding Hadoop support to XRootD

HDFS-based sites should utilize the xrootd-hdfs plugin to allow XRootD to access their storage.

root@host # yum install xrootd-hdfs

You will then need to add the following lines to your /etc/xrootd/xrootd-clustered.cfg:

xrootd.fslib /usr/lib64/libXrdOfs.so
ofs.osslib /usr/lib64/libXrdHdfs.so

For more information, see the HDFS installation documents.

(Optional) Adding File Residency Manager (FRM) to an XRootd cluster

If you have a multi-tiered storage system (e.g. some data is stored on SSDs and some on disks or tapes), then install the File Residency Manager (FRM), so you can move data between tiers more easily. If you do not have a multi-tiered storage system, then you do not need FRM and you can skip this section.

The FRM deals with two major mechanisms:

  • local disk
  • remote servers

A description of a fully functional multi-cluster XRootD system is beyond the scope of this document. For such a system you will need a global redirector and at least one remote XRootD cluster from which files can be moved to the local cluster.

Below are the modifications you should make in order to enable FRM on your local cluster:

  1. Make sure that FRM is enabled in /etc/sysconfig/xrootd on your data server:
XROOTD_USER=xrootd
XROOTD_GROUP=xrootd
XROOTD_DEFAULT_OPTIONS="-l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg"
CMSD_DEFAULT_OPTIONS="-l /var/log/xrootd/cmsd.log -c /etc/xrootd/xrootd-clustered.cfg"
FRMD_DEFAULT_OPTIONS="-l /var/log/xrootd/frmd.log -c /etc/xrootd/xrootd-clustered.cfg"
XROOTD_INSTANCES="default"
CMSD_INSTANCES="default"
FRMD_INSTANCES="default"
  2. Modify /etc/xrootd/xrootd-clustered.cfg on both nodes to specify options for frm_xfrd (the File Transfer Daemon) and frm_purged (the File Purging Daemon); a sketch follows this list. For more information, see the FRM Documentation.
  3. Start the FRM daemons on the data server:
root@hostB # service frm_xfrd start
root@hostB # service frm_purged start
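For step 2, a minimal sketch of FRM directives on the data server. The values are illustrative assumptions (GLOBAL-RDR is a hypothetical remote redirector to fetch files from); consult the FRM Documentation for the full directive reference:

# fetch missing files into the local cluster from a remote source
frm.xfr.copycmd /usr/bin/xrdcp root://GLOBAL-RDR:1094/$SRC $DST
# purge cached files when free space drops below 20GB, until 25GB is free
frm.purge.policy * 20g 25g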

(Optional) Installing a GridFTP Server

The Globus GridFTP server can be installed alongside an XRootD storage element to provide GridFTP-based access to the storage.

See Also

OSG has extensive documentation on setting up a GridFTP server; this section is an abbreviated version documenting the special steps needed for XRootD integration. You may also find the general GridFTP documentation useful.

Prior to following this installation guide, verify that host certificates and networking are configured correctly, as in the basic GridFTP install.

Installation

GridFTP support for XRootD-based storage is provided by the osg-gridftp-xrootd meta-package:

root@host # yum install osg-gridftp-xrootd

Configuration

For information on how to configure authentication for your GridFTP installation, please refer to the configuring authentication section of the GridFTP guide.

Edit /etc/sysconfig/xrootd-dsi to set XROOTD_VMP to use your XRootD redirector.

export XROOTD_VMP="redirector:1094:/local_path=/remote_path"

Warning

The syntax of XROOTD_VMP is tricky; make sure to use the following guidance:

  • Redirector: The hostname and domain of the local XRootD redirector server.
  • local_path: The path exported by the GridFTP server.
  • remote_path: The XRootD path that will be mounted at local_path.
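A hypothetical example, assuming the redirector redirector1.domain.com, a GridFTP export path of /mnt/xrootd, and an XRootD path of /data/xrootdfs:

export XROOTD_VMP="redirector1.domain.com:1094:/mnt/xrootd=/data/xrootdfs"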

When xrootd-dsi is enabled, GridFTP configuration changes should go into /etc/xrootd-dsi/gridftp-xrootd.conf, not /etc/gridftp.conf.
Sites should review any customizations made in the latter and copy them as necessary.

You can use the FUSE mount to test POSIX access to XRootD from the GridFTP server host. You should be able to run Unix commands such as ls /mnt/xrootd and see the contents of the XRootD server.

For log / config file locations and system services to run, see the basic GridFTP install.

Using XRootD

Managing XRootD services

Start services on the redirector node before starting any services on the data nodes. If you installed only XRootD itself, you will only need to start the xrootd service. However, if you installed cluster management services, you will need to start cmsd as well.

The instructions for starting and stopping an XRootD service depend on whether the service is installed on an EL 6 or EL 7 machine, and whether you are using a standalone or clustered configuration.

On EL 6, which config to use is set in the file /etc/sysconfig/xrootd. For example, to have xrootd use the clustered config, you would have a line such as this:

XROOTD_DEFAULT_OPTIONS="-l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-clustered.cfg -k fifo"

To use the standalone config instead, you would use:

XROOTD_DEFAULT_OPTIONS="-l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-standalone.cfg -k fifo"

On EL 7, which config to use is determined by the service name given to systemctl. For example, to have xrootd use the clustered config, you would start up xrootd with this line:

root@host # systemctl start xrootd@clustered

To use the standalone config instead, you would use:

root@host # systemctl start xrootd@standalone

The services are:

Service                      EL 6 service name   EL 7 service name
XRootD (standalone config)   xrootd              xrootd@standalone
XRootD (clustered config)    xrootd              xrootd@clustered
CMSD (clustered config)      cmsd                cmsd@clustered

As a reminder, here are common service commands (all run as root):

To …                                          On EL 6, run the command…    On EL 7, run the command…
Start a service                               service SERVICE-NAME start   systemctl start SERVICE-NAME
Stop a service                                service SERVICE-NAME stop    systemctl stop SERVICE-NAME
Enable a service to start during boot         chkconfig SERVICE-NAME on    systemctl enable SERVICE-NAME
Disable a service from starting during boot   chkconfig SERVICE-NAME off   systemctl disable SERVICE-NAME

Getting Help

To get assistance, please use the Help Procedure page.

Reference

File locations

Service/Process        Configuration File                  Description
xrootd                 /etc/xrootd/xrootd-clustered.cfg    Main clustered-mode XRootD configuration
                       /etc/xrootd/auth_file               Authorized users file

Service/Process        Log File                            Description
xrootd                 /var/log/xrootd/xrootd.log          XRootD server daemon log
cmsd                   /var/log/xrootd/cmsd.log            Cluster management log
cns                    /var/log/xrootd/cns/xrootd.log      Server inventory (composite name space) log
frm_xfrd, frm_purged   /var/log/xrootd/frmd.log            File Residency Manager log