Boost C++ Libraries: Ticket #10009: Boost Asio should provide also thread-safe sockets https://svn.boost.org/trac10/ticket/10009 <p> The feature I'm requesting is strictly related to the well-known problem reported in ticket <a class="closed ticket" href="https://svn.boost.org/trac10/ticket/8538" title="#8538: Bugs: asio: race condition for epoll &amp; kpoll (closed: invalid)">#8538</a>, but I'm not reporting it as a bug, since the documentation explicitly states that Asio's sockets are <em>not</em> thread-safe <em>by design</em>: the user <em>must</em> use strands at the top level to ensure correct behavior in a multi-threaded environment. If I've understood correctly, the rationale for this design is that in a single-threaded environment one should pay no cost for synchronization. I disagree with this design decision for several reasons. </p> <h2 class="section" id="Themulti-threadedenvironmentsaremoreimportant">Multi-threaded environments are more important</h2> <p> I believe that the environments where Asio is used seriously (heavily) are multi-threaded (there is an Asio-based thread pool): this means that the optimization favors the simpler cases (where the cost of an uncontended lock acquisition is negligible); IMHO this is wrong. It imposes a significant additional overhead on users, who necessarily have to: </p> <ol><li>use strands (through strand::wrap() as suggested in the docs) <em>or</em> </li><li>use mutexes (or other lock types) and always carry a reference to them along with the corresponding socket <em>or</em> </li><li>wrap Asio's socket in a custom class that internally uses locking.
</li></ol><h2 class="section" id="strand::wrapisnotalwaysthatuseful">strand::wrap() is not always that useful</h2> <p> I believe that the officially recommended solution to "the problem" (wrapping the callbacks with strand::wrap()) is a bad idea in theory, since it serializes the invocation of the callbacks (which is <em>almost</em> what a mutex does): this is <em>deeply</em> wrong because it is an anti-pattern: the fundamental principle of synchronization is that ONLY the data has to be "locked", NOT the code. Using strand::wrap() ( ~= locking the CODE ) can lead to serious performance bottlenecks if the callbacks' code is not written <em>carefully</em> (one could unknowingly perform a heavy computation (or synchronous I/O) several levels of function calls deep). So, within this model, writing efficient code means being careful and necessarily re-scheduling all non-trivial tasks: that is not always easy.<br /> What if, for example, one wants to close 100 sockets in a callback? Using strand::wrap() would mean using a common strand for all 100 sockets (i.e. serializing all the callbacks for these sockets): this is an <em>unacceptable</em> cost to pay. Instead, using a strand for each socket and re-scheduling a close() request via strand::post() is much better, but would mean always carrying a reference to the corresponding strand for each socket: this is the same problem as alternative 2; no one would be happy to do this.<br /> It seems that the most attractive alternative is the third one. </p> <h2 class="section" id="Thenon-thread-safesocketsarenotsomethingonewouldexpectfromAsio">Non-thread-safe sockets are not something one would expect from Asio</h2> <p> I'm pretty sure I'm not the only one who, before reading the documentation <em>carefully</em>, expected the sockets to be thread-safe <em>as a matter of course</em>: we are dealing with a high-level library for asynchronous I/O.
The situation is analogous to C++11's atomics: by default they use FULL barriers, because that makes them simpler to use; advanced users (who really know what they're doing) can always specify the exact type of barrier they need. I think Asio is roughly at the same level of abstraction as the standard C++ library (if not a higher one), so I think it should follow the same philosophy. </p> <h2 class="section" id="SynchronizationisalreadyusedinAsiosinternals">Synchronization is already used in Asio's internals</h2> <p> Even though the official docs declare Asio's sockets non-thread-safe, <em>there are</em> in fact synchronization mechanisms in Asio's reactors (epoll and kqueue) that allow the sockets to <em>almost work</em> in a multi-threaded environment, except when destructive operations like close() are used. In those cases, the subtle race described in <a class="closed ticket" href="https://svn.boost.org/trac10/ticket/8538" title="#8538: Bugs: asio: race condition for epoll &amp; kpoll (closed: invalid)">#8538</a> appears, since the mutex protecting access to the descriptor_data node <em>has to be</em> released before freeing the node, because the mutex itself is a member of that (same) node. That mutex makes me suspect that <em>maybe</em> an attempt was made to solve the synchronization problem at the reactor level, but it failed because the socket's lifetime is longer than that of the descriptor_data node, which has to die when the descriptor is closed. Since the reactors have no access to the upper-level socket services (such a thing would break the isolation of the abstraction layers), the only solution seems to be a top-level synchronization mechanism. </p> <h2 class="section" id="WhatismypurposetoimproveAsio">What I propose to improve Asio</h2> <p> I propose a simple patch (the attached file) that adds a lock contended only between the async and the destructive operations in the top-level classes (the *_socket_service classes).
I think this solution is a good compromise because all the code it adds is under an #ifdef: whoever wants to work easily with Asio's sockets simply has to enable it at compile time; everyone else can still pay the "zero" cost in single-threaded applications. Even though I'd prefer the define to work the opposite way, I'm proposing this change because it has a minor impact and preserves the old behavior unless specified, so I hope it has a greater chance of being accepted. </p> <h2 class="section" id="Justafinalremark">Just a final remark</h2> <p> I'd just like to say that, even if it probably doesn't come across in my comments, I'm an Asio fan: overall, it is really a great Boost-style library and I use it intensively. That's why I'm spending so much energy trying to improve it. </p> Boost C++ Libraries https://svn.boost.org/trac10/ticket/10009 Vladislav Valtchev <vladi32@…> Tue, 06 May 2014 14:52:09 GMT attachment set <ul> <li><strong>attachment</strong> → <span class="trac-field-new">thread_safe_socket_services.patch</span> </li> </ul> Ticket anonymous Mon, 14 Nov 2016 08:57:21 GMT <link>https://svn.boost.org/trac10/ticket/10009#comment:1 </link> <guid isPermaLink="false">https://svn.boost.org/trac10/ticket/10009#comment:1</guid> <description> <p> Strands don't make Asio thread-safe, by the way. They only make the callbacks thread-safe. You still can't read/write from multiple threads, even when calling from within a stranded callback. I've made several tests to prove this to myself, and like you I am very disappointed and kind of baffled.
</p> </description> <category>Ticket</category> </item> <item> <dc:creator>anonymous</dc:creator> <pubDate>Thu, 01 Dec 2016 09:09:52 GMT</pubDate> <title/> <link>https://svn.boost.org/trac10/ticket/10009#comment:2 </link> <guid isPermaLink="false">https://svn.boost.org/trac10/ticket/10009#comment:2</guid> <description> <p> Replying to <a class="ticket" href="https://svn.boost.org/trac10/ticket/10009#comment:1" title="Comment 1">anonymous</a>: </p> <blockquote class="citation"> <p> Strands don't make Asio thread-safe, by the way. They only make the callbacks thread-safe. You still can't read/write from multiple threads, even when calling from within a stranded callback. I've made several tests to prove this to myself, and like you I am very disappointed and kind of baffled. </p> </blockquote> <p> If you always use the same strand for requesting async reads/writes, it is thread-safe, because the strand itself causes functors wrapped with it to execute <em>sequentially</em>. The problem is that you cannot simply issue a request from <em>outside</em> a callback in a thread-safe way: if thread X receives a WIN32 event, for example (with its handler not wrapped by a strand, of course), and it wants to do an async_read() on socket S, then it necessarily has to post, on the strand of socket S, a functor that requests the async_read() and has its callback wrapped with the same strand as well. So, the point is that even <strong>the requests for async operations have to be async</strong>, unless they come from code that is <em>already executing</em> within a strand. In practice, this causes significant re-scheduling overhead and, depending on your point of view, less readable code. </p> </description> <category>Ticket</category> </item> </channel> </rss>