PyTangoArchiving Recipes#

audience:developers lang:python

by Sergi Rubio

PyTangoArchiving is the Python API for Tango Archiving.

This package allows you to:

  • Integrate Hdb and Snap archiving with other python/PyTango tools.

  • Start/Stop Archiving devices in the appropriate order.

  • Increase configuration and diagnostics capabilities.

  • Import/Export .csv and .xml files between the archiving and the database.

Don’t edit this wiki directly; the source for this documentation is available at PyTangoArchiving UserGuide.

Installing PyTangoArchiving:#

Repository is available on sourceforge:

$ svn co https://svn.code.sf.net/p/tango-cs/code/archiving/tool/PyTangoArchiving/trunk

Dependencies:#

  1. Tango Java Archiving (ArchivingRoot, available on sourceforge)

  2. PyTango

  3. python-mysql

  4. Taurus (optional)

  5. fandango:

$ svn co https://svn.code.sf.net/p/tango-cs/code/share/fandango/trunk/fandango fandango

Setup:#

  • Follow Tango Java Archiving installation document to setup Java Archivers and Extractors.

  • Some of the most common installation issues are addressed in several topics in Tango forums (search for Tdb/Hdb/Snap Archivers).

  • Install PyTango and MySQL-python using their own setup.py scripts.

  • The fandango and PyTangoArchiving parent folders must be added to your PYTHONPATH environment variable.

  • Although Java Extractors may be used, it is recommended to configure direct MySQL access for PyTangoArchiving.
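The PYTHONPATH step above can be sketched as follows; the checkout location is just an example, adjust it to wherever you placed the two packages:

```shell
# Assuming both packages were checked out under /opt/src (example path only),
# add their parent folder so that "import fandango" and
# "import PyTangoArchiving" work from any python shell
export PYTHONPATH=/opt/src:$PYTHONPATH
```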

Accessing MySQL:#

Although not required, I recommend creating a dedicated MySQL user for data querying:

$ mysql -u hdbmanager -p hdb

mysql> GRANT USAGE ON hdb.* TO 'user'@'localhost' IDENTIFIED BY '**********';
mysql> GRANT USAGE ON hdb.* TO 'user'@'%' IDENTIFIED BY '**********';
mysql> GRANT SELECT ON hdb.* TO 'user'@'localhost';
mysql> GRANT SELECT ON hdb.* TO 'user'@'%';

$ mysql -u tdbmanager -p tdb

mysql> GRANT USAGE ON tdb.* TO 'user'@'localhost' IDENTIFIED BY '**********';
mysql> GRANT USAGE ON tdb.* TO 'user'@'%' IDENTIFIED BY '**********';
mysql> GRANT SELECT ON tdb.* TO 'user'@'localhost';
mysql> GRANT SELECT ON tdb.* TO 'user'@'%';

Check in a python shell that you're able to access the database:

import PyTangoArchiving

PyTangoArchiving.Reader(db='hdb',config='user:password@hostname')

Then configure the Hdb/Tdb Extractor class properties to use this user/password for querying:

import PyTango

PyTango.Database().put_class_property('HdbExtractor',{'DbConfig':'user:password@hostname'})

PyTango.Database().put_class_property('TdbExtractor',{'DbConfig':'user:password@hostname'})

You can now test access from a Reader object (see recipes below) or from a taurustrend/ArchivingBrowser UI (Taurus required):

python PyTangoArchiving/widget/ArchivingBrowser.py

Download#

Download PyTangoArchiving from sourceforge:

svn co https://svn.code.sf.net/p/tango-cs/code/archiving/tool/PyTangoArchiving/trunk

Submodules#

  • api,

    • getting the servers/devices/instances involved in the archiving system

  • historic,

    • configuration and reading of historic data

  • snap,

    • configuration and reading of snapshot data,

  • xml,

    • conversion between xml and csv files

  • scripts,

    • configuration scripts

  • reader,

    • providing the useful Reader and ReaderProcess objects to retrieve archived data

General usage#

In all these examples you can use hdb or tdb, just replacing one by the other.

Get archived values for an attribute#

The Reader object provides fast access to archived values:

In [9]: import PyTangoArchiving
In [10]: rd = PyTangoArchiving.Reader('hdb')
In [11]: rd.get_attribute_values('expchan/eh_emet02_ctrl/3/value','2013-03-20 10:00','2013-03-20 11:00')
Out[11]:
[(1363770788.0, 5.79643e-14),
 (1363770848.0, 5.72968e-14),
 (1363770908.0, 5.7621e-14),
 (1363770968.0, 6.46782e-14),
 ...
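The timestamps in these tuples are unix epochs; converting them to readable dates needs only the standard library (plain Python, independent of PyTangoArchiving):

```python
import time

# (epoch, value) pairs as returned by Reader.get_attribute_values
values = [(1363770788.0, 5.79643e-14), (1363770848.0, 5.72968e-14)]

# Convert each epoch to a human-readable local date
readable = [(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(t)), v)
            for t, v in values]
```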

Start/Stop/Check attributes#

You must create an Archiving api object and pass it the list of attributes with their archiving configuration:

import PyTangoArchiving
hdb = PyTangoArchiving.ArchivingAPI('hdb')
attrs = ['expchan/eh_emet03_ctrl/3/value','expchan/eh_emet03_ctrl/4/value']

#Archive every 15 seconds if change > +/-1.0, else every 300 seconds
modes = {'MODE_A': [15000.0, 1.0, 1.0], 'MODE_P': [300000.0]}

#If you omit the modes argument then archiving will be every 60s
hdb.start_archiving(attrs, modes)

hdb.load_last_values(attrs)
{'expchan/eh_emet02_ctrl/3/value': [[datetime.datetime(2013, 3, 20, 11, 38, 9),
    7.27081e-14]],
 'expchan/eh_emet02_ctrl/4/value': [[datetime.datetime(2013, 3, 20, 11, 39),
    -3.78655e-08]]
}

hdb.stop_archiving(attrs)
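The mode semantics used above can be sketched in plain Python (my own illustration of the documented behaviour, not the archiver's actual code): with MODE_A a value is stored after the short period only if it changed beyond the +/- thresholds, while MODE_P stores it unconditionally once its longer period elapses.

```python
def should_archive(last_value, last_time, value, now, modes):
    """Decide whether a new sample would be stored under the given modes.
    Times are in seconds, mode periods in milliseconds (as in the dicts above)."""
    elapsed_ms = (now - last_time) * 1000.0
    if 'MODE_A' in modes:
        period_ms, below, above = modes['MODE_A']
        if elapsed_ms >= period_ms and (
                value < last_value - below or value > last_value + above):
            return True
    if 'MODE_P' in modes:
        if elapsed_ms >= modes['MODE_P'][0]:
            return True
    return False

#Same configuration as the example: 15s absolute mode, 300s periodic fallback
modes = {'MODE_A': [15000.0, 1.0, 1.0], 'MODE_P': [300000.0]}
```

For instance, a change of 2.0 after 20 s is archived, a change of 0.5 is not, but after 400 s any value is stored by the periodic mode.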

Loading a .CSV file into Archiving#

The .csv file must have a shape like the following (any row starting with ‘#’ is ignored):

Host  Device  Attribute   Type    ArchivingMode   Periode >15  MinRange    MaxRange

#These header lines are mandatory!!!
@LABEL  Unique ID
@AUTHOR Who?
@DATE   When?
@DESCRIPTION    What?

#host   domain/family/member    attribute   HDB/TDB/STOP    periodic/absolute/relative

cdi0404 LI/DI/BPM-ACQ-01    @DEFAULT        periodic    300
                            ADCChannelAPeak HDB absolute    15  1   1
                                            TDB absolute    5   1   1
                            ADCChannelBPeak HDB absolute    15  1   1
                                            TDB absolute    5   1   1
                            ADCChannelCPeak HDB absolute    15  1   1
                                            TDB absolute    5   1   1
                            ADCChannelDPeak HDB absolute    15  1   1
                                            TDB absolute    5   1   1

The command to insert it is:

import PyTangoArchiving
PyTangoArchiving.LoadArchivingConfiguration('/...fbecheri_20130319.csv','hdb',launch=True)

There are some arguments that modify the loading behavior.

launch:

if not explicitly True, archiving is not triggered; the function only verifies that the file format is correct and that the attributes are available.

force:

if False, loading stops at the first error; if True, all attributes are tried even if some of them fail.

overwrite:

if False, attributes already being archived are skipped.
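The combined effect of these flags can be modelled with a toy function (a plain-Python illustration of the behaviour described above, not the library's actual implementation):

```python
def load_configuration(attributes, launch=False, force=False, overwrite=False,
                       already_archived=(), start=lambda attr: True):
    """Toy model of LoadArchivingConfiguration's launch/force/overwrite flags.
    start is a callback standing in for 'start archiving this attribute'."""
    started, failed = [], []
    for attr in attributes:
        if attr in already_archived and not overwrite:
            continue                      # overwrite=False: skip configured attributes
        if not launch:
            continue                      # launch=False: validate only, never start
        if start(attr):
            started.append(attr)
        else:
            failed.append(attr)
            if not force:
                break                     # force=False: stop at the first error
    return started, failed
```

With a callback that fails on one attribute, force=False stops there while force=True keeps trying the remaining ones.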

Checking the status of the archiving#

import time
import fandango
import PyTangoArchiving

hdb = PyTangoArchiving.ArchivingAPI('hdb')
hdb.load_last_values()
attr_filter = "/" #Put here whatever you want to filter the attribute names
lates = [a for a in hdb if attr_filter in a and hdb[a].archiver
         and hdb[a].modes.get('MODE_P')
         and hdb[a].last_date < (time.time()-(3600+1e-3*hdb[a].modes['MODE_P'][0]))]

#Get the list of attributes that cannot be read from the control system (ask system responsibles)
unav = [a for a in lates if not fandango.device.check_attribute(a,timeout=6*3600)]
#Get the list of attributes that are readable but not being archived
lates = sorted(l for l in lates if l not in unav)
#Get the list of archivers not running properly
bad_archs = [a for a,v in hdb.check_archivers().items() if not v]

#Restarting the archivers/attributes that failed
bads = [l for l in lates if hdb[l].archiver not in bad_archs]
astor = fandango.Astor()
astor.load_from_devs_list(bad_archs)
astor.restart_servers()
hdb.restart_archiving(bads)

Restart of the whole archiving system#

admin@archiving:> archiving_service.py stop-all
...
admin@archiving:> archiving_service.py start-all
...
admin@archiving:> archiving_service.py status

#see archiving_service.py help for other usages

Using the Python API#

Start/Stop of a small (<10) list of attributes#

import PyTangoArchiving
api = PyTangoArchiving.ArchivingAPI('hdb')

#Stopping ...
api.stop_archiving(['bo/va/dac/input','bo/va/dac/settings'])

#Starting with periodic=60s ; relative=15s if +/-1% change
api.start_archiving(['bo/va/dac/input','bo/va/dac/settings'],{'MODE_P':[60000],'MODE_R':[15000,1,1]})

#Restarting while keeping the actual configuration
attr_name = 'bo/va/dac/input'
api.start_archiving([attr_name],api.attributes[attr_name].extractModeString())

Checking if a list of attributes is archived#

In [16]: hdb = PyTangoArchiving.api('hdb')
In [17]: sorted([(a,hdb.load_last_values(a)) for a in hdb if a.startswith('bl04')])
Out[17]:
[('bl/va/elotech-01/output_1',
  [[datetime.datetime(2010, 7, 2, 15, 53), 6.0]]),
 ('bl/va/elotech-01/output_2',
  [[datetime.datetime(2010, 7, 2, 15, 53, 11), 0.0]]),
 ('bl/va/elotech-01/output_3',
  [[datetime.datetime(2010, 7, 2, 15, 53, 23), 14.0]]),
 ('bl/va/elotech-01/output_4',
  [[datetime.datetime(2010, 7, 2, 15, 52, 40), 20.0]]),
...

Getting information about attributes archived#

Getting the total number of attributes:#

import PyTangoArchiving
api = PyTangoArchiving.ArchivingAPI('hdb')
len(api.attributes) #All the attributes in history
len([a for a in api.attributes.values() if a.archiving_mode]) #Attributes configured

Getting the configuration of attribute(s):#

#Getting it as a string
modes = api.attributes['sr/da/bpm-07/CompensateTune'].archiving_mode

#Getting it as a dict
api.attributes['sr/da/bpm-07/CompensateTune'].extractModeString()

#OR
PyTangoArchiving.utils.modes_to_dict(modes)
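The conversion done by modes_to_dict can be sketched as below; this is a simplified stand-in assuming a comma-separated mode string like 'MODE_P,60000,MODE_R,15000,1,1' (the exact format stored in the database may differ):

```python
def modes_to_dict(modestring):
    """Parse a comma-separated archiving mode string into a dict, e.g.
    'MODE_P,60000,MODE_R,15000,1,1' ->
    {'MODE_P': [60000.0], 'MODE_R': [15000.0, 1.0, 1.0]}.
    Simplified sketch of PyTangoArchiving.utils.modes_to_dict."""
    modes, key = {}, None
    for token in modestring.replace(' ', '').split(','):
        if token.startswith('MODE_'):
            key = token           # a new mode starts here
            modes[key] = []
        elif key is not None:
            modes[key].append(float(token))  # numeric parameter of the current mode
    return modes
```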

Getting the list of attributes not updated in the last hour:#

failed = sorted(api.get_attribute_failed(3600).keys())

Getting values for an attribute:#

import PyTangoArchiving,time

reader = PyTangoArchiving.Reader() #An HDB Reader object using HdbExtractors
#OR
reader = PyTangoArchiving.Reader(db='hdb',config='pim:pam@pum') #An HDB reader accessing MySQL directly

attr = 'bo04/va/ipct-05/state'
dates = time.time()-5*24*3600,time.time() #last 5 days
values = reader.get_attribute_values(attr,*dates) #returns a list of (epoch,value) tuples

Exporting values from a list of attributes as a text (csv / ascii) file#

from PyTangoArchiving import Reader
rd = Reader(db='hdb') #If the HdbExtractor.DbConfig property is set, one argument is enough
attrs = [
         'bl11-ncd/vc/eps-plc-01/pt100_1',
         'bl11-ncd/vc/eps-plc-01/pt100_2',
        ]

#If you omit the text argument you will get lists of values; with text=True you get a tabulated string.
ascii_values = rd.get_attributes_values(attrs,
                      start_date='2010-10-22',stop_date='2010-10-23',
                      correlate=True,text=True)

print ascii_values

#Save it as .csv if you want ...
open('myfile.csv','w').write(ascii_values)
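What correlate=True does can be pictured with a small stand-alone function (my own sketch, not the library code): every series is resampled onto a common list of timestamps, taking for each timestamp the latest value recorded at or before it.

```python
def correlate_series(series_dict, times):
    """Resample each {name: [(time, value), ...]} series onto common times,
    keeping for each time the latest value at or before it (None if none yet)."""
    table = []
    for t0 in times:
        row = [t0]
        for name in sorted(series_dict):
            earlier = [v for t, v in series_dict[name] if t <= t0]
            row.append(earlier[-1] if earlier else None)
        table.append(row)
    return table
```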

Filtering State changes for a device#

import time
import PyTango
import PyTangoArchiving as pta

rd = pta.Reader('hdb','...:...@...')
vals = rd.get_attribute_values('bo02/va/ipct-02/state','2010-05-01 00:00:00','2010-07-13 00:00:00')
bads = []
for i,v in enumerate(vals[1:]):
    if v[1]!=vals[i][1]: #compare each sample against the previous one
        bads.append((v[0],vals[i][1],v[1]))

state = lambda s: str(PyTango.DevState.values[int(s)]) if s is not None else 'None'
report = [(time.ctime(v[0]),state(v[1]),state(v[2])) for v in bads]

#report =
#[('Sat May  1 00:07:03 2010', 'UNKNOWN', 'ON'),
#...

Getting a table with last values for all attributes of a same device#

import time
hours = 1
device = 'bo/va/ipct-05'
#reader is a PyTangoArchiving.Reader object (see previous recipes)
attrs = [a for a in reader.get_attributes() if a.lower().startswith(device)]
vars = dict([(attr,reader.get_attribute_values(attr,time.time()-hours*3600)) for attr in attrs])
#For each timestamp of the first attribute, take the latest value of every attribute
table = [[time.ctime(t0)]+
         [([v for t,v in var if t<=t0] or [None])[-1] for attr,var in sorted(vars.items())]
        for t0,v0 in vars.values()[0]]
print('\n'.join(
      ['\t'.join(['date','time']+[k.lower().replace(device,'') for k in sorted(vars.keys())])]+
      ['\t'.join([str(s) for s in t]) for t in table]))

Using CSV files#

Loading an HDB/TDB configuration file#

Create dedicated archivers first#

If you want to use this option it will require some RAM in the host machine (around 64 Mb of RAM per 250 attributes) and installing the ALBA-Archiving bliss package.

from PyTangoArchiving.files import DedicateArchiversFromConfiguration
DedicateArchiversFromConfiguration('LX_I_Archiving.csv','hdb',launch=True)

TDB Archiving works differently, as it should not run on diskless machines; a centralized host is used instead for all archiver devices.

DedicateArchiversFromConfiguration('LX_I_Archiving.csv','tdb',centralized='archiving01',launch=True)

Loading the .csv files#

All the needed code to do it is:

import PyTangoArchiving

#With launch=False this function will do a full check of the attributes and print the results
PyTangoArchiving.LoadArchivingConfiguration('/data/Archiving//LX_I_Archiving_.csv','hdb',launch=False)

#With launch=True configuration will be recorded and archiving started
PyTangoArchiving.LoadArchivingConfiguration('/data/Archiving//LX_I_Archiving_.csv','hdb',launch=True)

#To force archiving of all not-failed attributes
PyTangoArchiving.LoadArchivingConfiguration('/data/Archiving//LX_I_Archiving_.csv','hdb',launch=True,force=True)

#Starting archiving in TDB mode (data kept 5 days only)
PyTangoArchiving.LoadArchivingConfiguration('/data/Archiving//LX_I_Archiving_.csv','tdb',launch=True,force=True)

Note

You must take into account the following conditions:

  • Names of attributes must match the NAME, not the LABEL! (that’s a common mistake)

  • Devices providing the attributes must be running when you setup archiving.

  • Regular expressions are NOT ALLOWED (I know previous releases allowed it, but never worked really well)
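A minimal pre-check for the first and third conditions can be written with a hypothetical helper (not part of PyTangoArchiving): verify that every entry looks like a full Tango attribute name, which also rejects labels and regular expressions.

```python
import re

# Full attribute names have four slash-separated members:
# domain/family/member/attribute (no spaces, no regexp metacharacters)
ATTR_NAME = re.compile(r'^[\w\-\.]+/[\w\-\.]+/[\w\-\.]+/[\w\-\.]+$')

def looks_like_attribute_name(name):
    """True if name looks like a plain Tango attribute NAME (not a label or regexp)."""
    return bool(ATTR_NAME.match(name))
```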

Filtering a list of CSV configurations / attributes to load#

You can use GetConfigFiles and filters/exclude to select a predefined list of attributes:

import PyTangoArchiving as pta

filters = {'name':".*"}
exclude = {'name':"(s.*bpm.*)|(s10.*rf.*)|(s14.*rf.*)"}

#TDB
confs = pta.GetConfigFiles(mask='.*(RF|VC).*')
for target in confs:
    pta.LoadArchivingConfiguration(target,launch=True,force=True,overwrite=True,
                                   dedicated=False,schema='tdb',filters=filters,exclude=exclude)

#HDB
confs = pta.GetConfigFiles(mask='.*BO.*(RF|VC).*')
for target in confs:
    pta.LoadArchivingConfiguration(target,launch=True,force=True,overwrite=True,
                                   dedicated=True,schema='hdb',filters=filters,exclude=exclude)

Comparing a CSV file with the actual configuration#

import PyTangoArchiving
api = PyTangoArchiving.ArchivingAPI('hdb')
config = PyTangoArchiving.ParseCSV('Archiving_RF_.csv')

for attr,conf in config.items():
    if attr not in api.attributes or not api.attributes[attr].archiving_mode:
        print '%s not archived!' % attr
    else:
        modes = PyTangoArchiving.utils.modes_to_string(api.check_modes(conf['modes']))
        if modes != api.attributes[attr].archiving_mode:
            print '%s: %s != %s' % (attr,modes,api.attributes[attr].archiving_mode)

Checking and restarting a known system from a .csv#

import PyTangoArchiving
import PyTangoArchiving.files as ptaf
import PyTangoArchiving.utils as ptau

borf = '/data/Archiving/BO_20100603_v2.csv'
config = ptaf.ParseCSV(borf)
hdb = PyTangoArchiving.ArchivingAPI('hdb')

missing = [
    'bo/ra/fim-01/remotealarm',
    'bo/ra/fim-01/rfdet1',
    'bo/ra/fim-01/rfdet2',
    'bo/ra/fim-01/arcdet5',
    'bo/ra/fim-01/rfdet3',
    'bo/ra/fim-01/arcdet3',
    'bo/ra/fim-01/arcdet2',
    'bo/ra/fim-01/vacuum']

ptau.check_attribute('bo/ra/fim-01/remotealarm')
missing = 'bo/ra/fim-01/arcdet4|bo/ra/fim-01/remotealarm|bo/ra/fim-01/rfdet1|bo/ra/fim-01/rfdet2|bo/ra/fim-01/arcdet5|bo/ra/fim-01/rfdet3|bo/ra/fim-01/arcdet3|bo/ra/fim-01/arcdet2|bo/ra/fim-01/vacuum'

ptaf.LoadArchivingConfiguration(borf,filters={'name':missing},launch=True)
ptaf.LoadArchivingConfiguration(borf,filters={'name':'bo/ra/eps-plc.*'},stop=True,force=True)
ptaf.LoadArchivingConfiguration(borf,filters={'name':'bo/ra/eps-plc.*'},launch=True,force=True)

rfplc = ptaf.ParseCSV(borf,filters={'name':'bo/ra/eps-.*'})
stats = ptaf.CheckArchivingConfiguration(borf,period=300)