- one thread per potential and actual connection, doing accept/connect and
  all operations like read/write on sockets/files/pipes, and doing all the
  processing itself (sketch below)
+ simple application code
- CPU/thread mismatch: in general we get far more threads than available
  CPUs, thus higher costs for task switches
- shared resources need more synchronization and can become hot spots
- threading differs between platforms and is sometimes not available;
  networking should be independent of whether we want to use threads or not
  (orthogonal design)
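
  A rough illustration of this model, assuming plain POSIX sockets and C++
  std::thread (serve_client, port 9000 etc. are made-up names, not the
  project's code): one blocking accept loop, one detached thread per
  connection that does all the blocking I/O and processing:

    // hypothetical thread-per-connection echo server (POSIX sockets + std::thread)
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <thread>

    // each connection gets its own thread that blocks on read/write and
    // does all the protocol processing itself
    static void serve_client(int fd) {
        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)          // blocking read
            if (write(fd, buf, static_cast<size_t>(n)) < 0)  // blocking write (echo)
                break;
        close(fd);
    }

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);                         // arbitrary example port
        if (bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof addr) < 0 ||
            listen(listener, SOMAXCONN) < 0) {
            perror("bind/listen");
            return 1;
        }
        for (;;) {
            int fd = accept(listener, nullptr, nullptr);     // one blocking accept loop
            if (fd < 0)
                continue;
            std::thread(serve_client, fd).detach();          // one thread per connection
        }
    }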
- Reactor: Acceptor/Connector register with the reactor (for readiness only),
  which acts as the central event dispatcher (sketch below)
+ one thread, good as no synchronization is needed, but can't be easily
  scaled (especially not fairly!)
- complicated code, cooperative system; quite easy to cause DoS attacks this
  way, everything has to be non-blocking, and this can cause portability
  problems: we take asynchronicity, if available, as a performance
  optimization, but it's bad to rely on it (see tests with /dev/zero on some
  Unixes, and especially Windows, which doesn't support asio on some devices)
- POSIX select is hard to split across many threads, so the risk is that a
  central thread becomes the bottleneck
- portability can be a problem: we have to port the proactor and the
  event handlers (think of reading from a file asynchronously on Unix
  and Windows)
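
  A minimal single-threaded reactor sketch over select(), just to make the
  readiness/dispatch split concrete; EventHandler, Reactor, handle_read etc.
  are illustrative names, not the project's interfaces:

    // handlers register for readiness only; the reactor is the central dispatcher
    #include <sys/select.h>
    #include <map>
    #include <vector>

    class EventHandler {
    public:
        virtual ~EventHandler() = default;
        // called when the descriptor is readable; must use non-blocking I/O
        // and return quickly, otherwise it stalls the shared loop (cooperative!)
        virtual void handle_read(int fd) = 0;
    };

    class Reactor {
        std::map<int, EventHandler*> handlers_;       // fd -> handler
    public:
        void register_handler(int fd, EventHandler* h) { handlers_[fd] = h; }
        void remove_handler(int fd)                    { handlers_.erase(fd); }

        // one iteration of the dispatch loop: wait for readiness, then call back
        void handle_events() {
            fd_set readable;
            FD_ZERO(&readable);
            int maxfd = -1;
            for (const auto& e : handlers_) {
                FD_SET(e.first, &readable);
                if (e.first > maxfd)
                    maxfd = e.first;
            }
            if (maxfd < 0 ||
                select(maxfd + 1, &readable, nullptr, nullptr, nullptr) <= 0)
                return;
            // collect first so handlers may register/remove safely while dispatching
            std::vector<int> ready;
            for (const auto& e : handlers_)
                if (FD_ISSET(e.first, &readable))
                    ready.push_back(e.first);
            for (int fd : ready) {
                auto it = handlers_.find(fd);
                if (it != handlers_.end())
                    it->second->handle_read(fd);
            }
        }
    };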
- Proactor: one or many threads report completion of asynchronous events;
  we can have parallel I/O operations without the need for threads
  (sketch below)
- debugging is hard, but as we are going to use finite state machines for
  the protocol anyway, this is not really an issue (design and verify
  correctness, don't debug!)
- synchronous events must be made asynchronous
+ ASIO is THE model on Windows for all files (and almost everything is
  a file there); with the upcoming POSIX asio and some Unix-specific
  asio models (like Solaris /dev/poll) this is the future architecture
  anyway
+ better isolation of problems: "asynchronous OS-near operations" vs.
  "application operations"; concurrency strategies should be an orthogonal
  aspect of the problem
+ better OS abstraction possible than with the reactor or multi-thread model,
  especially since calls like async_write, signals etc. are platform-dependent
- cancellation of running operations and prioritization may be a bit tricky
- Mmh. This gives us the idea that the networking layer and some other
  components register with the Proactor system
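
  A sketch of the proactor shape assumed here (CompletionHandler, Proactor,
  post, handle_events are made-up names): the OS, or an emulation layer,
  performs the I/O and posts a completion record; one or many threads dequeue
  completions and invoke the handlers, so parallel I/O doesn't require one
  application thread per operation:

    #include <condition_variable>
    #include <cstddef>
    #include <deque>
    #include <mutex>

    class CompletionHandler {
    public:
        virtual ~CompletionHandler() = default;
        // called when a previously initiated asynchronous operation has finished
        virtual void handle_completion(std::size_t bytes_transferred, int error) = 0;
    };

    struct Completion {
        CompletionHandler* handler;
        std::size_t        bytes;
        int                error;
    };

    class Proactor {
        std::deque<Completion>  queue_;
        std::mutex              m_;
        std::condition_variable cv_;
    public:
        // called by the I/O layer (OS callback, aio notification, worker thread, ...)
        void post(const Completion& c) {
            { std::lock_guard<std::mutex> lock(m_); queue_.push_back(c); }
            cv_.notify_one();
        }
        // completion dispatch loop, may be run by one or many threads in parallel
        void handle_events() {
            for (;;) {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return !queue_.empty(); });
                Completion c = queue_.front();
                queue_.pop_front();
                lock.unlock();
                c.handler->handle_completion(c.bytes, c.error);
            }
        }
    };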
- Proactor
Connector/Acceptor
- decouple communication establishment/interruption, which is orthogonal
  to the protocol communication above
- connection establishment can also be unicast, broadcast or multicast;
  this could come in handy
- different life-times: connects/accepts don't change as often as the
  protocol to implement
- distinguish between connection roles (which are always client/server) and
  communication roles (which can be the same or reversed, can change during
  a communication session, or can be completely peer-to-peer)
- connect and accept vary heavily between Unixes and Windows
Connector and Acceptor as abstract factories, with concrete factories and
connections, e.g. for TCP/IP (sketch below)
- mmh. how is closing of connections handled? errors, etc.
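
  A sketch of what such abstract factories could look like (all names are
  illustrative, nothing here is a decided interface); establishment is
  decoupled from the protocol, the protocol layer only ever sees Connection
  objects:

    #include <memory>
    #include <string>

    class Connection {
    public:
        virtual ~Connection() = default;
        virtual void close() = 0;        // orderly shutdown; error reporting still open
    };

    // passive (server) connection role: produces connections from incoming requests
    class Acceptor {
    public:
        virtual ~Acceptor() = default;
        virtual std::unique_ptr<Connection> accept() = 0;
    };

    // active (client) connection role: produces connections to a remote endpoint
    class Connector {
    public:
        virtual ~Connector() = default;
        virtual std::unique_ptr<Connection> connect(const std::string& endpoint) = 0;
    };

    // one concrete factory, declarations only: the bodies would wrap
    // socket()/bind()/listen()/accept() on Unix and the Winsock calls on Windows
    class TcpAcceptor : public Acceptor {
    public:
        explicit TcpAcceptor(unsigned short port);
        std::unique_ptr<Connection> accept() override;
    };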