The Advanced Resource Connector (ARC) middleware, introduced by the
NorduGrid Collaboration (www.nordugrid.org), is an open source software
solution enabling production quality computational and data grids.
Since its first release (May 2002) the middleware has been deployed and
been used in production environments. Emphasis is put on scalability,
stability, reliability and performance of the middleware. A growing
number of grid projects, like Swegrid, NDGF and Swiss Bio Grid have
chosen ARC as their middleware.

This release (April 2011) is the first release of the new generation of
ARC (previews were known as ARC1 or Nox). It is based on a service
container - the Hosting Environment Daemon (HED) - and different grid
capabilities are implemented as Web Services residing in HED. The ARC
Compute Element is implemented as an OGSA BES compliant execution service
called A-REX (ARC Resource-coupled EXecution service), which provides the
same functionality as the Grid Manager did previously. Another service
residing in HED and included in this release is a simple echo service for
test purposes, but the set of services is rapidly growing.
The core part of the middleware is written in C/C++.
Dependencies
============
Building the software from source or installing a precompiled binary
requires different external packages; furthermore, the client and server
packages have different dependencies. The explicit requirements are listed
below:
Mandatory dependencies
----------------------
Build:
o GNU make, autotools (autoconf >= 2.56, automake >= 1.8)
o CVS
o m4
o GNU gettext
o C++ compiler and library
o libtool
o pkg-config
o doxygen
Build & runtime:
o e2fsprogs
o gthread-2.0 version 2.4.7 or later
o glibmm-2.4 version 2.4.7 or later
o libxml-2.0 version 2.4.0 or later
o openssl version 0.9.7a or later
If you are using the LDAP-based information system (infosys):
o bdii version 5 or later ( from repositories or http://download.nordugrid.org/packages/bdii/releases/ )
o glue-schema ( from repositories or http://svnweb.cern.ch/guest/gridinfo/glue-schema )
Optional dependencies
---------------------
Build:
o CppUnit for unit testing
o Grid Packaging Tools (GPT) (compute client)
o swig version 1.3.28 or later (bindings)
o python 2.4 or higher (bindings, APEL publisher by Jura, ACIX)
o java sdk 1.4 or later (bindings)
o globus-common 4 (compute client)
o globus-gssapi-gsi 4 (compute client)
o globus-ftp-client 4 (compute client)
o globus-ftp-control 4 (compute client)
o globus-io 4 (compute client)
o globus-openssl (compute client)
o Berkeley DB C++ interface (Delegation)
o xmlsec1 1.2.4 or higher (Security)
Runtime dependencies:
o Perl, libxml-simple-perl, perl-Digest-SHA1 (A-rex)
o Perl, perl-SOAP-Lite, perl-Crypt-OpenSSL-X509 (nordugridmap)
o GNU time (A-rex)
o VOMS (LFC DMC)
o pyOpenSSL, python-twisted-web, python-twisted-core, python-simplejson,
(Python 2.4 only) python-hashlib (ACIX)
Please note that, depending on the operating system distribution, you may
need to install the development versions of the packages mentioned above in
order to build ARC.
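As an illustration only (package names are assumed and differ between
distributions and releases), on a Debian-based system the build-time
dependencies could be installed with something like:
apt-get install build-essential autoconf automake libtool pkg-config \
  gettext doxygen libglibmm-2.4-dev libxml2-dev libssl-dev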
Getting the software
====================
The middleware is free to deploy anywhere by anybody. Pre-built binary
releases for a dozen Linux platforms can be downloaded from the
NorduGrid software repository at download.nordugrid.org.

The software is released under the Apache 2.0 License (see the LICENSE
file).
The NorduGrid repository hosts the source code and provides most of
the required external software which is not part of a standard Linux
distribution.
You can get the latest source code for ARC from the Subversion
repository. See http://svn.nordugrid.org for more details.
There are also nightly code snapshots available at
http://download.nordugrid.org/software/nordugrid-arc/experimental/ .
Choose the latest version available, go into the src directory and
download the tarball.
Building & Installation
=======================
The recommended way to install ARC is from repositories. If you want
to build it yourself and have downloaded the tarball, unpack it and cd into
the created directory.
tar -zxvf nordugrid-arc-1.0.0.tar.gz
cd nordugrid-arc-1.0.0
If you obtained the code from the Subversion repository, use the
'tags/1.0.0' directory.
svn co http://svn.nordugrid.org/repos/nordugrid/arc1/tags/1.0.0 nordugrid-arc
cd nordugrid-arc
Now configure the obtained code with
./configure --prefix=PLACE_TO_INSTALL_ARC
Choose the installation prefix wisely, according to the requirements of your
OS and your personal preferences. ARC should function properly from any
location. By default the installation goes into /usr if you omit the
'--prefix' option. If you install into a directory other than /usr you may
need to set the following environment variable after installation:
export ARC_LOCATION=PLACE_TO_INSTALL_ARC
On some systems 'autogen.sh' may produce a few warnings. Ignore them as
long as 'configure' passes without errors. In case of problems during
configuration or compilation, collect the error messages and include them
when reporting the problem.
If the previous commands finish without errors, do
touch src/doxygen/*.pdf
in order to get around an issue with timestamps and then compile and
install ARC:
make
make install
If you have already installed ARC libraries in the system default
location such as /usr/lib you may need to use the following
installation command instead in order to override installed pkgconfig
files and/or libtool archives which contain -L/usr/lib:
make LDFLAGS="-L<PLACE_TO_INSTALL_ARC>/lib" install

On some systems you may need to use gmake instead of make.
Depending on the chosen installation location you may need to run the last
command as root. That should install the following components:
sbin/arched - server executable
bin/ - user tools and command line clients
lib/ - common libraries used by clients, server and plugins
lib/arc/ - plugins implementing Message Chain, Service and Security components
include/arc/ - C++ headers for application development
libexec/ - additional modules used by ARC services - currently only A-REX
share/arc - configuration examples, templates etc.
share/doc/nordugrid-arc-* - documentation
share/locale - internationalization files - currently very limited support
share/man - manual pages for various utilities
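As a quick, optional sanity check (a minimal sketch assuming the default
layout described above), you can verify that the server executable and the
plugin directory are in place:
ls $ARC_LOCATION/sbin/arched
ls $ARC_LOCATION/lib/arc/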
X509 Certificates
=================
Most of the planned and existing ARC services use HTTPS as the transport
protocol, so they require a proper setup of the X509 security infrastructure.
The minimal setup includes:
* Host certificate (containing the public key) in PEM format
* Corresponding private key
* Certificate of the Certification Authority (CA) which was used to sign the host certificate
* Certificates of the CAs of clients which are going to send requests to the services,
unless of course the clients use the same CA as the server.
More information about X509 certificates and their usage in a Grid environment
can be found at http://www.nordugrid.org/documents/certificate_howto.html
and http://www.nordugrid.org/documents/ng-server-install.html#security
For testing purposes you can use the pre-generated certificates and keys available at
http://svn.nordugrid.org/trac/nordugrid/browser/doc/trunk/tech_doc/sec/TestCA
Alternatively, you may choose to use the KnowARC Instant CA service available at
https://vls.grid.upjs.sk/CA/instantCA . It is especially useful for testing
installations consisting of multiple hosts.
Please remember that it is not safe to use such instant keys in publicly
accessible installations of ARC. Make sure that even the generated CA
certificate is removed before making your services available to the
outside world.
You can put host certificates and private keys anywhere. Common locations
for servers running from the root account are /etc/grid-security/hostcert.pem
and /etc/grid-security/hostkey.pem, respectively. The private key must not
be encrypted or protected by a password, since a service has no way to ask
a person for the password. So make sure the key is properly protected by
means of the file system.
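For example, when the key is stored in the common location mentioned above
and the service runs from the root account, the file system protection could
be tightened like this:
chown root:root /etc/grid-security/hostkey.pem
chmod 400 /etc/grid-security/hostkey.pem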
It is possible to configure the ARC server to accept either a single CA
certificate or multiple CA certificates located in a specified directory.
The latter option is recommended. The common location is
/etc/grid-security/certificates/ .
In that case the names of the certificate files have to be derived from the
hash values of the certificates. These can be obtained by running the command
openssl x509 -hash -noout -in path_to_certificate
The corresponding file name for the certificate should be <hash_value>.0 .
The hash value for the pre-generated CA certificate is 4457e417.
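For example, assuming a CA certificate has been copied to
/etc/grid-security/certificates/ca.pem (a hypothetical file name), a
correctly named link can be created with:
cd /etc/grid-security/certificates
ln -s ca.pem "$(openssl x509 -hash -noout -in ca.pem).0"
For the pre-generated CA certificate this would result in a link named
4457e417.0 .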
1. Configuration for mutual authentication
Please make sure the chosen location of certificates is correctly
configured in the service configuration file. The configuration for the
certificate for TLS MCC should look like this:
<KeyPath>/etc/grid-security/hostkey.pem</KeyPath>
<CertificatePath>/etc/grid-security/hostcert.pem</CertificatePath>
<CACertificatesDir>/etc/grid-security/certificates</CACertificatesDir>
<CACertificatePath>/etc/grid-security/ca.pem</CACertificatePath>
The key file has to be without a passphrase on the server side. You can also
configure a proxy certificate instead of the normal certificate (see the section
"Proxy certificate Generation & Usage" below).
The same requirements are valid for the client tools of ARC. You may use
the pre-generated user certificate and key located at the same place. The
locations of the credentials need to be provided to the client tools as well.
2. Configuration without client authentication
You can also configure only server-authentication instead of mutual
authentication. In this case, the server will not send a client certificate
request to the client, so the client will not send a certificate to the
server, which means only the server's certificate is checked.
For the server side, the configuration for the certificate for TLS MCC
should look like this:
<KeyPath>/etc/grid-security/hostkey.pem</KeyPath>
<CertificatePath>/etc/grid-security/hostcert.pem</CertificatePath>
<ClientAuthn>false</ClientAuthn>
Note: here neither <CACertificatePath/> nor <CACertificatesDir/> is needed,
because the client's certificate will not be checked by the server; but
<ClientAuthn/> is required, since it explicitly specifies that the client's
certificate will not be checked. The default value of <ClientAuthn/> is
"true", in which case it does not need to be explicitly specified.
For the client side, the configuration for the certificate for TLS MCC
should look like this:
<CACertificatesDir>/etc/grid-security/certificates</CACertificatesDir>
or
<CACertificatePath>/etc/grid-security/ca.pem</CACertificatePath>
Note: here only the CA information needs to be specified.
The set of pre-generated keys and certificates also includes a user
certificate in PKCS12 format which you can import into your browser
for accessing ARC services capable of producing HTML output.
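If you also want to use that certificate with the command line tools, the
PEM formatted certificate and key can usually be extracted with OpenSSL
(the file name usercert.p12 below is just an assumed example; OpenSSL will
ask for the import password):
openssl pkcs12 -in usercert.p12 -clcerts -nokeys -out usercert.pem
openssl pkcs12 -in usercert.p12 -nocerts -out userkey.pem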
ARC comes with the utility arcproxy to generate proxy credentials
from a certificate/private key pair. It provides only basic functionality
and is meant for testing purposes only.
IMPORTANT: If during the configuration stage you see the message "OpenSSL
contains no support for proxy credentials", it means you will not be able
to use proxy credentials generated by utilities like grid-proxy-init,
voms-proxy-init or arcproxy. In that case all user private keys have to be
kept unencrypted.
Proxy certificate Generation & Usage
====================================
As mentioned above, ARC comes with the proxy generation utility arcproxy,
installed in ARC_LOCATION/bin. The usage of arcproxy looks like this:
ARC_LOCATION/bin/arcproxy -P proxy.pem -C cert.pem -K key.pem
-c validityStart=2008-05-29T10:20:30Z
-c validityEnd=2008-06-29T10:20:30Z
-c proxyPolicyFile=delegation_policy.xml
By using the "-c" argument, constraints can be specified for the proxy
certificate.
Currently, the lifetime can be specified by using
"-c validityStart=..." and "-c validityEnd=...", or "-c validityStart=..."
and "-c validityPeriod=...";
the proxy policy can be specified by using "-c proxyPolicyFile=...".
Note: If "validityStart" is not set, the current time will be used as the
start time of the proxy.
If neither "validityEnd" nor "validityPeriod" is set, the default validity
period is 12 hours.
If "validityStart" is set, it should not be before the current time.
The time unit for "validityPeriod" is seconds, e.g. "-c validityPeriod=86400"
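Putting this together, a proxy valid for 24 hours starting from the current
time could for instance be generated with:
ARC_LOCATION/bin/arcproxy -P proxy.pem -C cert.pem -K key.pem -c validityPeriod=86400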
If a proxy certificate is used, the configuration for the certificate for
the TLS MCC, in the configuration file on the service side or the client
side, should look like this:
<KeyPath>./proxy.pem</KeyPath>
<CertificatePath>./proxy.pem</CertificatePath>
<CACertificatePath>./ca.pem</CACertificatePath>
Since normally a proxy certificate file includes the proxy certificate
and private key corresponding to the proxy certificate, <KeyPath/> and
<CertificatePath/> are configured the same.
Alternatively, you can directly configure <ProxyPath/> instead of <KeyPath/>
and <CertificatePath/>:
<ProxyPath>./proxy.pem</ProxyPath>
<CACertificatePath>./ca.pem</CACertificatePath>
A proxy policy can be specified as a constraint. The proxy policy is used
for constraining identity delegation. Currently, the supported policy is an
ARC-specific policy. The proxy policy is inserted into the proxy
certificate's "proxy cert info" extension using RFC3820's policy language
"NID_id_ppl_anyLanguage".
ARC Server Setup & Configuration
================================
The configuration of the ARC server can be specified either in arc.conf or
in an XML file, the location of which is given as a command line argument
with the -c option of the 'arched' daemon. Examples of configuration files
with comments describing the various elements are available in the
directory share/doc/arc of the ARC installation.
The echo service is "atomic" and has no additional dependencies other
than what is provided by the Hosting Environment Daemon (HED). An example
of an echo service configuration can be found in share/doc/arc/echo.xml.
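For instance, after copying that file and adjusting the paths inside it to
match your installation, the echo service can be started in the same way as
any other arched-hosted service:
$ARC_LOCATION/sbin/arched -c path_to_edited_echo.xml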
The Echo Client
===============
The configuration of the ARC echo client is specified in an XML file. The
location of the configuration file is given by the environment variable
ARC_ECHO_CONFIG. If there is no such environment variable, the
configuration file is assumed to be echo_client.xml in the current working
directory. An example configuration file can be found among the installed
configuration examples.
To use the echo client, run it with <message> as its argument,
where <message> is the message which the echo service will return.
The A-REX Service
=================
ARC comes with an OGSA BES compliant Grid job management service called A-REX.
To deploy A-REX, use the example configuration files available in share/doc/arc :
* arex.xml - configuration for the arched server. Read the comments inside this
file and edit it to fit your installation. This file defines the WS interface of A-REX.
* arc-arex.conf - legacy configuration for the Grid Manager part of A-REX. This
file defines how jobs are managed by A-REX locally. Read and edit it. For more
detailed information please read the Grid Manager documentation available in the
SVN repository:
http://svn.nordugrid.org/trac/nordugrid/browser/doc/trunk/tech_doc/a-rex/arex_tech_doc.pdf?format=raw
The Grid Manager runs as part of the A-REX service; there is no need to run any
additional executable. But you still need to set up its infrastructure if you are
going to have anything more sophisticated than what is described in the example
configuration. For more information read the previously mentioned document.
A-REX uses either GridFTP or HTTPS as its transport protocol (although you can
reconfigure it to use plain HTTP), so it requires a proper setup of the X509
security infrastructure. See above for instructions.
Copy the example configuration files to some location and edit them. Make sure
all paths to X509 certificates and to the Grid Manager configuration are set
correctly. Then start the server:
$ARC_LOCATION/sbin/arched -c path_to_edited_arex.xml
Look into the log file specified in arex.xml for possible errors. You can safely
ignore messages like "Not a '...' type plugin" and "Unknown element ... - ignoring".
If you compiled ARC with Globus support and you see complaints that a
"libglobus..." shared object file cannot be opened, try adding "/opt/globus/lib"
to your LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/globus/lib
Testing the A-REX Service
=========================
Now you may use the command line utility 'arcinfo' to obtain a service description.
You can do something like
./arcinfo -c ARC1:https://localhost:60000/arex -l
This should produce a description of the resources A-REX represents. Below
you can see an example of correct output.
---
Cluster: localhost
Health State: ok
Location information:
Domain information:
Service information:
Service Name: MINIMAL Computing Element
Service Type: org.nordugrid.execution.arex
Endpoint information:
URL: https://localhost:60000/arex
Capabilities:
executionmanagement.jobexecution
Technology: webservice
Interface Name: OGSA-BES
Supported Profiles:
WS-I 1.0
HPC-BP
Implementor: NorduGrid
Implementation Name: A-REX
Implementation Version: 0.9
QualityLevel: development
Health State: ok
Serving State: production
Issuer CA: /O=Grid/O=NorduGrid/CN=NorduGrid Certification Authority
Trusted CAs:
/C=BE/O=BELNET/OU=BEGrid/CN=BEGrid CA/emailAddress=gridca@belnet.be
/C=FR/O=CNRS/CN=CNRS2-Projets
/DC=org/DC=ugrid/CN=UGRID CA
/C=BR/O=ICPEDU/O=UFF BrGrid CA/CN=UFF Brazilian Grid Certification Authority
/C=DE/O=DFN-Verein/OU=DFN-PKI/CN=DFN-Verein PCA Grid - G01
/C=PT/O=LIPCA/CN=LIP Certification Authority
/C=FR/O=CNRS/CN=GRID-FR
/C=FR/O=CNRS/CN=CNRS2
/C=TR/O=TRGrid/CN=TR-Grid CA
/C=NL/O=NIKHEF/CN=NIKHEF medium-security certification auth
/DC=org/DC=DOEGrids/OU=Certificate Authorities/CN=DOEGrids CA 1
/DC=ch/DC=cern/CN=CERN Trusted Certification Authority
/C=AU/O=APACGrid/OU=CA/CN=APACGrid/emailAddress=camanager@vpac.org
/C=IE/O=Grid-Ireland/CN=Grid-Ireland Certification Authority
/O=Grid/O=NorduGrid/CN=NorduGrid Certification Authority
/DC=RO/DC=RomanianGRID/O=ROSA/OU=Certification Authority/CN=RomanianGRID CA
/DC=bg/DC=acad/CN=BG.ACAD CA
/C=MX/O=UNAMgrid/OU=UNAM/CN=CA
/C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid Root CA 2006
/C=CL/O=REUNACA/CN=REUNA Certification Authority
/DC=org/DC=balticgrid/CN=Baltic Grid Certification Authority
/C=IT/O=INFN/CN=INFN CA
/DC=me/DC=ac/DC=MREN/CN=MREN-CA
/C=FR/O=CNRS/CN=CNRS-Projets
/C=DE/O=DFN-Verein/OU=DFN-PKI/CN=DFN-Verein User CA Grid - G01
/C=UK/O=eScienceCA/OU=Authority/CN=UK e-Science CA
/C=RS/O=AEGIS/CN=AEGIS-CA
/C=SI/O=SiGNET/CN=SiGNET CA
/C=VE/O=Grid/O=Universidad de Los Andes/OU=CeCalCULA/CN=ULAGrid Certification Authority
/DC=ORG/DC=SEE-GRID/CN=SEE-GRID CA
/C=CH/O=Switch - Teleinformatikdienste fuer Lehre und Forschung/CN=SWITCH Personal CA
/C=RU/O=RDIG/CN=Russian Data-Intensive Grid CA
/C=HU/O=KFKI RMKI CA/CN=KFKI RMKI CA
/C=JP/O=KEK/OU=CRC/CN=KEK GRID Certificate Authority
/DC=EDU/DC=UTEXAS/DC=TACC/O=UT-AUSTIN/CN=TACC Root CA
/C=AT/O=AustrianGrid/OU=Certification Authority/CN=Certificate Issuer
/C=IL/O=IUCC/CN=IUCC/emailAddress=ca@mail.iucc.ac.il
/DC=TW/DC=ORG/DC=NCHC/CN=NCHC CA
/C=KR/O=KISTI/O=GRID/CN=KISTI Grid Certificate Authority
/DC=LV/DC=latgrid/CN=Certification Authority for Latvian Grid
/DC=NET/DC=PRAGMA-GRID/CN=PRAGMA-UCSD CA
/C=CH/O=SwissSign/CN=SwissSign CA (RSA IK May 6 1999 18:00:58)/emailAddress=ca@SwissSign.com
/C=MA/O=MaGrid/CN=MaGrid CA
/C=MK/O=MARGI/CN=MARGI-CA
/C=GR/O=HellasGrid/OU=Certification Authorities/CN=HellasGrid CA 2006
/C=TH/O=NECTEC/OU=GOC/CN=NECTEC GOC CA
/C=PL/O=GRID/CN=Polish Grid CA
/C=UK/O=eScienceRoot/OU=Authority/CN=UK e-Science Root
/DC=cz/DC=cesnet-ca/CN=CESNET CA
/C=TW/O=AS/CN=Academia Sinica Grid Computing Certification Authority Mercury
/DC=es/DC=irisgrid/CN=IRISGridCA
/C=JP/O=AIST/OU=GRID/CN=Certificate Authority
/C=JP/O=National Research Grid Initiative/OU=CGRD/CN=NAREGI CA
/DC=BR/DC=UFF/DC=IC/O=UFF LACGrid CA/CN=UFF Latin American and Caribbean Catch-all Grid CA
/C=CY/O=CyGrid/O=HPCL/CN=CyGridCA
/DC=CN/DC=Grid/CN=Root Certificate Authority at CNIC
/C=AR/O=e-Ciencia/OU=UNLP/L=CeSPI/CN=PKIGrid
/C=CN/O=HEP/CN=gridca-cn/emailAddress=gridca@ihep.ac.cn
/C=CA/O=Grid/CN=Grid Canada Certificate Authority
/CN=SWITCH CA/emailAddress=switch.ca@switch.ch/O=Switch - Teleinformatikdienste fuer Lehre und Forschung/C=CH
/DC=CN/DC=Grid/DC=SDG/CN=Scientific Data Grid CA
/C=HU/O=NIIF/OU=Certificate Authorities/CN=NIIF Root CA
/C=IR/O=IPM/O=IRAN-GRID/CN=IRAN-GRID CA
/C=FR/O=CNRS/CN=CNRS
/C=CH/O=Switch - Teleinformatikdienste fuer Lehre und Forschung/CN=SWITCHgrid Root CA
/C=AM/O=ArmeSFo/CN=ArmeSFo CA
/C=FR/O=CNRS/CN=GRID2-FR
/DC=net/DC=ES/O=ESnet/OU=Certificate Authorities/CN=ESnet Root CA 1
/DC=ch/DC=cern/CN=CERN Root CA
/DC=IN/DC=GARUDAINDIA/CN=Indian Grid Certification Authority
/C=DE/O=GermanGrid/CN=GridKa-CA
/C=SK/O=SlovakGrid/CN=SlovakGrid CA
/CN=SwissSign Bronze CA/emailAddress=bronze@swisssign.com/O=SwissSign/C=CH
/DC=EDU/DC=UTEXAS/DC=TACC/O=UT-AUSTIN/CN=TACC Classic CA
/C=BE/OU=BEGRID/O=BELNET/CN=BEgrid CA
/CN=SwissSign Silver CA/emailAddress=silver@swisssign.com/O=SwissSign/C=CH
/C=CH/O=Switch - Teleinformatikdienste fuer Lehre und Forschung/CN=SWITCH Server CA
/C=PK/O=NCP/CN=PK-GRID-CA
/C=DE/O=DFN-Verein/OU=DFN-PKI/CN=DFN-Verein Server CA Grid - G01
/C=HR/O=edu/OU=srce/CN=SRCE CA
Staging: staginginout
Job Descriptions:
ogf:jsdl:1.0
Queue information:
Mapping Queue: default
Max Total Jobs: 100
Max Running Jobs: 10
Max Waiting Jobs: 99
Max Pre LRMS Waiting Jobs: 0
Max User Running Jobs: 5
Max Slots Per Job: 1
Doesn't Support Preemption
Total Jobs: 0
Running Jobs: 0
Waiting Jobs: 0
Suspended Jobs: 0
Staging Jobs: 0
Pre-LRMS Waiting Jobs: 0
Free Slots: 10
Free Slots With Duration:
P68Y1M5DT3H14M7S: 10
Used Slots: 0
Requested Slots: 0
Manager information:
Resource Manager: torque
Doesn't Support Advance Reservations
Doesn't Support Bulk Submission
Total Physical CPUs: 10
Total Logical CPUs: 10
Total Slots: 10
Non-homogeneous Resource
Working area is not shared among jobs
Working Area Total Size: 15
Working Area Free Size: 4
Working Area Life Time: P7D
Cache Area Total Size: 15
Cache Area Free Size: 4
Execution Environment information:
Execution environment is a physical machine
Execution environment does not support inbound connections
Execution environment does not support outbound connections
---
Please note that you can run a similar arcinfo request against any ARC service
except the echo service.
A-REX accepts jobs described in JSDL. Example JSDL jobs are provided
in $ARC_LOCATION/share/doc/ in the files 'jsdl_simple.xml' and 'jsdl_stage.xml'. To
submit a job to the A-REX service one may use the 'arcsub' command:
$ARC_LOCATION/bin/arcsub -c ARC1:https://localhost:60000/arex -f /usr/local/share/doc/arc/jsdl_simple.xml -j id.xml
If everything goes fine, somewhere in its output there should be a message
"Job submitted!", and a job identifier is obtained which will be stored
in the 'id.xml' file. One can then query the job state with the 'arcstat' utility:
$ARC_LOCATION/bin/arcstat id.xml
Job status: Running/Finishing
$ARC_LOCATION/bin/arcstat id.xml
Job status: Finished/Finished
The set of A-REX client tools consists of the arcsub, arcstat, arckill, arcget
and arcclean commands. For more information please see the man pages of these utilities.
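For example, following the same pattern as above, once the job has reached
the Finished state its output can be retrieved with arcget, or the job can be
discarded with arcclean (a sketch assuming the default client behaviour):
$ARC_LOCATION/bin/arcget id.xml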
Security and Authorization
==========================
ARC implements security related features through a set of Security Handler and
Policy Decision Point (PDP) components. Security Handler components are attached
to message processing components. Each Security Handler takes care of processing
its own part of the security information. Currently ARC comes with the following
Security Handlers:
* identity.map - associates the client's identity with a local (UNIX) identity. It
uses PDP components to choose the local identity and/or the identity mapping algorithm.
* arc.authz - calls PDP components and combines the obtained authorization decisions.
* delegation.collector - parses the proxy policy from the remote proxy certificate.
This Security Handler should be configured under the TLS MCC component.
* usernametoken.handler - implements the functionality of the WS-Security Username
Token profile. It generates a username token into the SOAP header, or extracts the
username token out of the SOAP header and performs authentication based on the
extracted token.
Among the available PDP components are:
* allow - always returns a positive result
* deny - always returns a negative result
* simplelist.pdp - compares the DN of the user to those stored in a file.
* arc.pdp - compares request information parsed from the message against the
policy information specified for this PDP.
* pdpservice.invoker - composes a request, puts it into a SOAP message, and
invokes a remote PDP service to obtain the response SOAP message which includes
the authorization decision. The PDP service has similar functionality to arc.pdp.
* delegation.pdp - compares request information parsed from the message against
the policy information specified in the proxy certificate from the remote side.
Examples of the A-REX service and the echo service with Security Handlers in use
may be found at $ARC_LOCATION/share/doc/arc/arex_secure.xml and
$ARC_LOCATION/share/doc/arc/echo.xml
There is also a PDP service which implements the same functionality as arc.pdp.
Specifically for arc.pdp and the PDP service, a policy formatted according to a
specific schema has to be maintained; see $ARC_LOCATION/share/doc/arc/pdp_policy.xml.example
and $ARC_LOCATION/share/doc/arc/Policy.xsd for details.
For the usernametoken handler, there is an example of the service side
configuration in $ARC_LOCATION/share/doc/arc/echo.xml; you can run the echo service
using this configuration file with the usernametoken sechandler configured. On the
client side, the echo client (src/client/echo) can use the usernametoken sechandler
to authenticate against the echo service (see the README under src/client/echo);
there is also a test program in src/tests/echo/test_clientinterface.cpp which can be
compiled and tested against an echo service with the usernametoken sechandler
configured.
Finding more information
========================
Much information about the functionality and configuration of the various
components can be found in the corresponding configuration XML schemas.
Contributing
============
The open source development of the ARC middleware is coordinated by
the NorduGrid Collaboration. Currently, the main contributor is the
EMI project (www.eu-emi.eu), but the collaboration is open to new
members. Contributions from the community to the software and the
documentation are welcome. Sources can be downloaded from the software
repository at download.nordugrid.org or the Subversion code repository at
svn.nordugrid.org.
The technical coordination group defines outstanding issues that have
to be addressed in the framework of the ARC development. Feature
requests and enhancement proposals are recorded in the Bugzilla bug
tracking system at bugzilla.nordugrid.org. For a more detailed
description, write access to the code repository and further
questions, write to the nordugrid-discuss mailing list (see
www.nordugrid.org for details). Ongoing and completed Grid Research
projects and student assignments related to the middleware are listed
on the NorduGrid Web site as well.
Support, documentation, mailing lists, contact
==============================================
User support and site installation assistance is provided via the
request tracking system available at nordugrid-support@nordugrid.org.
In addition, NorduGrid runs several mailing lists, among which the
nordugrid-discuss mailing list is a general forum for all kinds of
issues related to the ARC middleware. The Bugzilla problem tracking system
(bugzilla.nordugrid.org) accepts requests for features or enhancements,
and is the prime medium to track and report problems.
Research papers, overview talks, reference manuals, user guides,
installation instructions, conference presentations, FAQ and even
tutorial materials can be fetched from the documentation section of
www.nordugrid.org
Contact information is kept updated on the www.nordugrid.org web site.