Context
When a Replica receives a batch, it iterates over the batch's requests and sequentially hands them to ServiceProxy for execution. The ServiceProxy is responsible for assigning every request a unique, monotonically increasing sequence number before delegating request execution to the Service implementation.
Problem
The ServiceProxy implementation uses an int field which it increments for every request. While this is fine under a single-threaded assumption, we have modified Replica to hand over requests in parallel to ServiceProxy for execution. This breaks the current sequence number assignment implementation.
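For concreteness, here is a minimal sketch of the single-threaded assignment described above. The Service and Request shapes and the exact signatures are assumptions for illustration, not the project's actual code:

```java
// Simplified sketch of the current scheme (illustrative, not the real classes).
class ServiceProxy {
    private final Service service;  // underlying Service implementation (assumed shape)
    private int nextSeqNo = 0;      // plain int: safe only while execute() is single-threaded

    ServiceProxy(Service service) {
        this.service = service;
    }

    byte[] execute(Request request) {
        // Read-modify-write without synchronisation: two parallel callers can
        // observe the same value and assign duplicate sequence numbers.
        int seqNo = nextSeqNo++;
        return service.execute(request, seqNo);
    }
}
```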
Proposed Solution
The ServiceProxy was originally used in the following manner, as depicted in the left part of the diagram below. Assume that batches consist of three requests. There are three sequential calls to execute(req) to execute the batch's requests, followed by an invocation of instanceExecuted(instance), which marks the end of the (now executed) batch agreed in Paxos instance instance.
With our change to process the requests of a batch in parallel, the ServiceProxy is now used as shown in the right part of the diagram below. We have augmented ServiceProxy with an execute(req, batch_pos) method, which specifies the batch position---batch_pos---of the request; this method is called in parallel by an ExecutorService. However, only after the batch has been entirely processed does instanceExecuted get invoked (a sketch of this dispatch pattern follows the legend below).
  e           e e e
  |           \ | /
  e            \|/
  |           fence
  e            /|\
  |           / | \
fence         e e e
  |           \ | /
  e            \|/
  |           fence
  e            /|\
  |            ...
  e
  |
fence
  |
 ...
e: ServiceProxy.execute(req{, batch_pos})
fence: ServiceProxy.instanceExecuted(instance)
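To make the right-hand pattern concrete, here is a hedged sketch of how the batch could be submitted to an ExecutorService, with the fence invoked only once every position has completed. The BatchDispatcher and Request names, and the exact ServiceProxy signatures, are illustrative assumptions rather than the project's actual API:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;

// Hypothetical Replica-side dispatcher for one batch (names are placeholders).
class BatchDispatcher {
    private final ExecutorService executor;
    private final ServiceProxy serviceProxy;

    BatchDispatcher(ExecutorService executor, ServiceProxy serviceProxy) {
        this.executor = executor;
        this.serviceProxy = serviceProxy;
    }

    void dispatchBatch(int instance, List<Request> batch) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(batch.size());
        for (int pos = 0; pos < batch.size(); pos++) {
            final int batchPos = pos;
            executor.submit(() -> {
                try {
                    serviceProxy.execute(batch.get(batchPos), batchPos);
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();                            // wait for the whole batch to finish
        serviceProxy.instanceExecuted(instance); // the "fence" in the diagram
    }
}
```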
I propose the following solution which does not require any kind of thread synchronisation:
We enrich the execute method with the batch size, i.e., execute(req, batch_pos, batch_sz). This way ServiceProxy can maintain the sum of the sizes of all previously executed batches---seq_num_base.
The batch_pos-th request in the batch is assigned the sequence number seq_num_base + batch_pos. When instanceExecuted is called, the batch has been fully processed, so we update seq_num_base := seq_num_base + batch_sz (see the sketch below).
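A minimal sketch of this scheme, under the same assumed Service/Request shape as above; field names beyond those mentioned in the proposal are made up for illustration:

```java
// Sketch of the proposed scheme. No locking is needed: seqNumBase is only read by
// the parallel execute() calls and is updated solely by instanceExecuted(), which
// the Replica calls from a single thread after the whole batch has completed (the
// join on the batch provides the necessary happens-before edge for visibility).
class ServiceProxy {
    private final Service service;
    private int seqNumBase = 0;     // sum of the sizes of all previously executed batches
    private int currentBatchSize;   // every execute() of a batch writes the same value

    ServiceProxy(Service service) {
        this.service = service;
    }

    byte[] execute(Request request, int batchPos, int batchSize) {
        currentBatchSize = batchSize;        // benign: all threads write the same value
        int seqNo = seqNumBase + batchPos;   // unique and monotonically increasing
        return service.execute(request, seqNo);
    }

    void instanceExecuted(int instance) {
        // The batch agreed in `instance` is fully executed; advance the base.
        seqNumBase += currentBatchSize;
    }
}
```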
- ClientBatchManager: keeps track of the number of requests per instance.
- Replica: relays the information above to ServiceProxy.
- ServiceProxy: thread-safe sequence number generation.
- {,Simplified}Service: revert changes.
- PagerService: Pager-specific service interface.