Threading#


When used with C++, Tango uses omniORB as its underlying ORB. This CORBA implementation is multi-threaded, and therefore C++ Tango device servers and clients are multi-threaded processes.

Device server process#

A classical Tango device server without any connected clients has eight threads. These threads are:

  • The main thread waiting in the ORB main loop

  • Two ORB implementation threads (the POA threads)

  • The ORB scavenger thread

  • The signal thread

  • The heartbeat thread (needed by the Tango event system)

  • Two ZeroMQ implementation threads

On top of these eight threads, you have to add the thread(s) used by the polling threads pool. This number depends on the polling thread pool configuration and ranges from 0 (no polling at all) to the maximum number of threads in the pool.

A new thread is started for each connected client. Device servers are mostly used to interface hardware, which most of the time does not support multi-threaded access. Therefore, all remote calls executed from a client are serialized within the device server code by mutual exclusion. See Serialization model within a device server for the available serialization models. In order to limit the number of threads, the underlying ORB (omniORB) is configured to shut down threads dedicated to a client if the connection is inactive for more than 3 minutes. Also to limit the number of threads, the ORB is configured to create one thread per connection up to 55 threads. When this level is reached, omniORB automatically switches to a thread pool model in which all connections are listened to by a single thread that dispatches incoming calls to a thread pool of up to 100 threads. The active per-connection threads are kept until they exit. When the number of connections decreases down to 50, omniORB switches back to the per-connection model for new incoming connections. More information is available in the omniORB documentation, § 6.4.
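This connection-management policy can be sketched as a simple decision function. The thresholds (55, 100, 50) come from the paragraph above; the function name and signature are purely illustrative and are not part of omniORB's API — omniORB implements this logic internally.

```cpp
#include <cassert>
#include <string>

// Illustrative sketch (not omniORB code) of the dispatch-model choice
// described above: per-connection threads up to 55 connections, then a
// thread pool (of up to 100 threads) fed by a single listening thread,
// reverting to per-connection mode once connections drop back to 50.
std::string dispatch_model(int active_connections, bool pool_mode_active)
{
    // Below the 55-connection threshold and not yet in pool mode:
    // each connection gets its own dedicated thread.
    if (!pool_mode_active && active_connections <= 55)
        return "thread-per-connection";

    // In pool mode, stay there while more than 50 connections remain.
    if (active_connections > 50)
        return "thread-pool";

    // Connection count has decreased to 50 or fewer: new incoming
    // connections are served by dedicated threads again.
    return "thread-per-connection";
}
```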

If you are using events, the event system's internal heartbeat periodically (every 200 seconds) sends a command to the admin device. As explained above, a thread is created to execute these commands. The omniORB scavenger terminates this thread before the next heartbeat command arrives. For example, if you have a device server with three connected clients using only events, the process thread count will continuously oscillate between 8 and 11 threads.

In summary, the number of threads in a device server process can be evaluated with the following formula:

8 + k + m

where k is the number of polling threads used from the polling threads pool and m is the number of threads serving connected clients.
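As a quick illustration of the formula, the estimate can be written as a tiny helper; the function name and the example values (two polling threads, three clients) are hypothetical.

```cpp
#include <cassert>

// Baseline thread count of an idle C++ Tango device server:
// main, 2 POA, scavenger, signal, heartbeat, 2 ZeroMQ threads.
constexpr int base_threads = 8;

// Estimate from the formula 8 + k + m, where k is the number of
// active polling threads and m the number of client-serving threads.
// Illustrative helper only, not a Tango API.
constexpr int server_threads(int k, int m)
{
    return base_threads + k + m;
}
```

For example, a server polling with two threads while serving three clients runs about `server_threads(2, 3)`, i.e. 13 threads.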

Serialization model within a device server#

Four serialization models are available within a device server. These models protect not only requests coming from the network but also requests coming from the polling threads. These models are:

  1. Serialization by device: All access to the same device is serialized. As an example, let’s take a device server implementing one class of device with two instances (dev1 and dev2). Two clients are connected to these devices (client1 and client2). Client2 will not be able to access dev1 if client1 is using it. Nevertheless, client2 is able to access dev2 while client1 accesses dev1, as there is one mutual exclusion object per device.

  2. Serialization by class: With non-multi-threaded legacy software, the preceding scenario could generate problems. In this serialization mode, client2 is not able to access dev2 while client1 accesses dev1, because dev2 and dev1 are instances of the same class and there is one mutual exclusion object per class.

  3. Serialization by process: This is one step further than the previous case. In this mode, only one client can access any device embedded within the device server at a time. There is only one mutual exclusion object for the whole process.

  4. No serialization: This is an exotic kind of serialization and should be used with extreme care, and only with devices which are fully thread safe. In this model, most of the device access is not serialized at all. Due to Tango's internal structure, the get_attribute_config, set_attribute_config, read_attributes and write_attributes CORBA calls are still protected. Reading the device state and status via commands or via CORBA attributes is also protected.
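The difference between the by-device and by-class models boils down to which mutex a request must acquire. The sketch below models this with plain std::mutex objects; the types and the helper function are hypothetical and are not Tango's implementation (which uses omni_mutex objects internally).

```cpp
#include <cassert>
#include <mutex>

// Hypothetical model of Tango's serialization granularity.
struct DeviceClass {
    std::mutex class_mutex;   // one mutex per class (BY_CLASS)
};

struct Device {
    DeviceClass *cls;         // the class this device belongs to
    std::mutex dev_mutex;     // one mutex per device (BY_DEVICE)
};

enum class SerialModel { BY_DEVICE, BY_CLASS };

// Return the mutex a request on 'dev' must take under 'model'.
std::mutex &serialization_mutex(Device &dev, SerialModel model)
{
    return model == SerialModel::BY_CLASS ? dev.cls->class_mutex
                                          : dev.dev_mutex;
}
```

Under BY_CLASS, dev1 and dev2 of the same class resolve to the same mutex, so client2 blocks while client1 works on dev1; under BY_DEVICE, each device has its own mutex and the two clients can proceed in parallel.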

By default, every Tango device server is in Serialization by device mode. A method of the Tango::Util class allows you to change this default behavior.

 1  #include <tango.h>
 2
 3  int main(int argc,char *argv[])
 4  {
 5      try
 6      {
 7          auto *tg = Tango::Util::init(argc,argv);
 8
 9          tg->set_serial_model(Tango::BY_CLASS);
10
11          tg->server_init();
12
 13          std::cout << "Ready to accept request" << std::endl;
14          tg->server_run();
15      }
16      catch (std::bad_alloc&)
17      {
18           std::cout << "Can't allocate memory!!!" << std::endl;
19           std::cout << "Exiting" << std::endl;
20      }
21      catch (CORBA::Exception &e)
22      {
23           Tango::Except::print_exception(e);
24
25           std::cout << "Received a CORBA::Exception" << std::endl;
26           std::cout << "Exiting" << std::endl;
27      }
28
29      return 0;
30  }

The serialization model is set at line 9, before the server is initialized and the infinite loop is started. See the cppTango API documentation for all details on the set_serial_model and get_serial_model methods.

Attribute Serialization model#

Even with the serialization model described previously, when an attribute carries a large amount of data and several clients read it, an additional attribute-level serialization is needed. Without this level of serialization, for attributes using a shared buffer, a thread switch may happen while the device server process is in the CORBA layer, transferring the attribute data over the network. Three serialization models are available for attributes. The default is well adapted to nearly all cases. Nevertheless, if the user code manages several attribute data buffers, or if it manages its own buffer protection in one way or another, it may be worth tuning this serialization level. The available models are:

  1. Serialization by kernel: This is the default case. The kernel manages the serialization.

  2. Serialization by user: The user code is in charge of the serialization, which is done with an omni_mutex object. An omni_mutex is a mutex class provided by the omniORB package. It is the user's responsibility to lock this mutex when appropriate and to give it to the Tango kernel before leaving the attribute read method.

  3. No serialization

By default, every Tango device attribute is in Serialization by kernel mode. Methods of the Tango::Attribute class allow you to change the attribute serialization behavior and to pass the user omni_mutex object to the kernel.

 1 void MyClass::init_device()
 2 {
 3    ...
 4    Tango::Attribute &att = dev_attr->get_attr_by_name("TheAttribute");
 5    att.set_attr_serial_model(Tango::ATTR_BY_USER);
 6    ....
 7
 8 }
 9
10 void MyClass::read_TheAttribute(Tango::Attribute &attr)
11 {
12    ....
13    the_mutex.lock();
14    ....
15    // Fill the attribute buffer
16    ....
17    attr.set_value(buffer,....);
18    attr.set_user_attr_mutex(&the_mutex);
19 }

The serialization model is set at line 5 in the init_device() method. The user omni_mutex is passed to the Tango kernel at line 18. This omni_mutex object is a device data member. See the cppTango API documentation for all details on the set_attr_serial_model and set_user_attr_mutex methods.

Client process#

Clients are also multi-threaded processes. The underlying C++ ORB (omniORB) tries to keep system resources to a minimum. To decrease process file descriptors usage, each connection to a server is automatically closed if it is idle for more than 2 minutes and automatically re-opened when needed. A dedicated thread is spawned by the ORB to manage this automatic connection closing (the ORB scavenger thread).

Therefore, a Tango client has two threads, which are:

  1. The main thread

  2. The ORB scavenger thread

If the client uses the event system with its push-push model, it has to act as a server in order to receive the events. This increases the number of threads.

Such a client has six threads, which are:

  • The main thread

  • The ORB scavenger thread

  • Two ZeroMQ implementation threads

  • Two Tango event system related threads (KeepAliveThread and EventConsumer)