ServiceProxy correctness using parallel batch execution #1

Closed
tvale opened this issue Oct 26, 2013 · 2 comments
tvale commented Oct 26, 2013

Context

When a Replica receives a batch, it iterates over the batch's requests and sequentially hands them to ServiceProxy for execution. The ServiceProxy is responsible for assigning every request a unique, monotonically increasing sequence number before delegating request execution to the Service implementation.

Problem

The ServiceProxy implementation uses an int field which it increments for every request. While this is fine under a single-threaded assumption, we have modified Replica to hand over requests in parallel to ServiceProxy for execution. This breaks the current sequence number assignment implementation.
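To make the hazard concrete, here is a minimal sketch of the counter-based assignment (class and field names are assumed, not the actual ServiceProxy code):

```java
// Hypothetical sketch of the current scheme: a plain int counter is
// correct only while execute() is invoked from a single thread.
class NaiveServiceProxy {
    private int sequenceNum = 0;

    int execute(byte[] request) {
        // sequenceNum++ is a non-atomic read-modify-write: two threads
        // calling execute() concurrently can read the same value, assign
        // the same sequence number to two requests, and skip another.
        return sequenceNum++;
    }
}

public class Sequential {
    public static void main(String[] args) {
        NaiveServiceProxy proxy = new NaiveServiceProxy();
        for (int i = 0; i < 3; i++) {
            System.out.println("assigned seq=" + proxy.execute(new byte[0]));
        }
    }
}
```

Under single-threaded use this yields 0, 1, 2 as expected; under parallel batch execution the read-modify-write races.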

Proposed Solution

The ServiceProxy was originally used in the manner depicted on the left of the diagram below. Suppose batches consist of three requests: there are three sequential calls to execute(req) to execute the batch's requests, followed by an invocation of instanceExecuted(instance) that marks the end of the (executed) batch agreed upon in the instance-th Paxos instance.
With our change to process a batch's requests in parallel, the ServiceProxy is now used as depicted on the right of the diagram below. We have augmented ServiceProxy with an execute(req, batch_pos) method, where batch_pos is the request's position within the batch; this method is called in parallel by an ExecutorService. Only after the batch has been entirely processed does instanceExecuted get invoked.

  e                e  e  e
  |                 \ | /
  e                  \|/
  |                 fence
  e                  /|\
  |                 / | \
fence              e  e  e
  |                 \ | /
  e                  \|/
  |                 fence
  e                  /|\
  |                  ...
  e
  |
fence
  |
 ...
  • e: ServiceProxy.execute(req{, batch_pos})
  • fence: ServiceProxy.instanceExecuted(instance)
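The right-hand pattern can be sketched as follows (all names here are illustrative; the real Replica/ServiceProxy wiring differs). Each batch is submitted to an executor, and the fence only runs once every request has completed:

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of the parallel usage pattern: execute the batch's requests
// concurrently, wait for all of them, then apply the fence.
public class ParallelBatch {
    // Runs one batch of `batchSize` requests in parallel and returns the
    // order in which events happened; the fence is always last.
    static List<String> runBatch(int batchSize) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(batchSize);
        List<String> events = Collections.synchronizedList(new ArrayList<>());
        CountDownLatch done = new CountDownLatch(batchSize);
        for (int pos = 0; pos < batchSize; pos++) {
            final int batchPos = pos;
            pool.submit(() -> {
                // serviceProxy.execute(req, batchPos) would run here
                events.add("execute@" + batchPos);
                done.countDown();
            });
        }
        done.await(); // every request in the batch has been executed
        // serviceProxy.instanceExecuted(instance) -- the fence
        events.add("instanceExecuted");
        pool.shutdown();
        return events;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runBatch(3));
    }
}
```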

I propose the following solution, which does not require any kind of thread synchronisation:
We enrich the execute method with the batch size, i.e., execute(req, batch_pos, batch_sz). This way ServiceProxy can maintain the sum of the sizes of all batches executed before the current one (seq_num_base).
The request at position batch_pos in the batch is assigned the sequence number seq_num_base + batch_pos. When instanceExecuted is called, the batch has been fully processed, so we update seq_num_base := seq_num_base + batch_sz.
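A minimal sketch of this scheme (method signatures follow the issue; field names and types are assumptions):

```java
// Proposed synchronisation-free sequence number assignment. Each request's
// number depends only on seq_num_base (fixed for the batch's duration) and
// its own batch position, so parallel execute() calls never conflict.
class BatchAwareServiceProxy {
    // Sum of the sizes of all batches executed before the current one.
    private volatile int seqNumBase = 0;
    private volatile int currentBatchSize = 0;

    // Called in parallel, once per request of the current batch. Every
    // caller writes the same batchSz value, so the racy write is benign.
    int execute(byte[] request, int batchPos, int batchSz) {
        currentBatchSize = batchSz;
        return seqNumBase + batchPos;
    }

    // Called once, after the whole batch has been processed (the fence).
    void instanceExecuted(int instance) {
        seqNumBase += currentBatchSize;
    }
}
```

Because seqNumBase only advances inside the single instanceExecuted call per batch, sequence numbers stay unique and monotonically increasing across batches without locks.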

@ghost ghost assigned tvale Oct 29, 2013
tvale added a commit that referenced this issue Oct 29, 2013
ClientBatchManager: keeps track of number of requests per instance.
Replica: relays the information above to ServiceProxy.
ServiceProxy: thread-safe sequence number generation.
{,Simplified}Service: revert changes.
PagerService: Pager-specific service interface.
@tvale
Copy link
Author

tvale commented Oct 29, 2013

Fixed in 1b28a14.
However, we need to assess whether we have broken the snapshotting functionality needed for crash recovery.

@tvale
Copy link
Author

tvale commented Nov 3, 2013

In 78415f7 the sequence number assignment in ServiceProxy was changed to a synchronised implementation.
#2 still applies.
