osdir.com


Re: Evolving the client protocol


Do we have driver authors who wish to support both projects?

On Wed, Apr 18, 2018 at 9:25 AM, Jeff Jirsa <jjirsa@xxxxxxxxx> wrote:

> Removed other lists (please don't cross post)
>
>
>
>
>
> On Wed, Apr 18, 2018 at 3:47 AM, Avi Kivity <avi@xxxxxxxxxxxx> wrote:
>
> > Hello Cassandra developers,
> >
> >
> > We're starting to see client protocol limitations impact performance,
> > and so we'd like to evolve the protocol to remove the limitations. In
> > order to avoid fragmenting the driver ecosystem and reduce work
> > duplication for driver authors, we'd like to avoid forking the
> > protocol. Since these issues affect Cassandra, either now or in the
> > future, I'd like to cooperate on protocol development.
> >
> >
> > Some issues that we'd like to work on near-term are:
> >
> >
> > 1. Token-aware range queries
> >
> >
> > When the server returns a page in a range query, it will also return
> > a token to continue on. In case that token is on a different node,
> > the client selects a new coordinator based on the token. This
> > eliminates a network hop for range queries.
> >
> >
> > For the first page, the PREPARE message returns information allowing
> > the client to compute where the first page is held, given the query
> > parameters. This is just information identifying how to compute the
> > token, given the query parameters (non-range queries already do this).
> >
> >
> > https://issues.apache.org/jira/browse/CASSANDRA-14311
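
To make the routing decision concrete, here is a rough driver-side sketch of
token-aware paging. All names here (`owner_of`, `run_range_query`, the shape
of a page) are hypothetical illustrations, not any real driver's API; the
point is only that the continuation token lets the client pick the next
coordinator itself.

```python
def owner_of(token, ring):
    """Return the node owning `token` on a sorted token ring.

    `ring` is a list of (ring_token, node) pairs sorted by ring_token;
    a token maps to the first entry with ring_token >= token, wrapping
    around past the highest token.
    """
    for ring_token, node in ring:
        if token <= ring_token:
            return node
    return ring[0][1]  # wrap around the ring

def run_range_query(first_coordinator, query, ring):
    """Fetch all pages, re-routing each page to the node that owns it."""
    coordinator = first_coordinator
    results = []
    while True:
        page = coordinator.execute(query)
        results.extend(page["rows"])
        next_token = page.get("next_token")
        if next_token is None:
            return results
        # The key idea of the proposal: the server returns the token to
        # continue from, so the client can contact the owning node
        # directly instead of paying an extra hop through the old
        # coordinator.
        coordinator = owner_of(next_token, ring)
```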
> >
> >
> > 2. Per-request timeouts
> >
> >
> > Allow each request to have its own timeout. This allows the user to
> > set short timeouts on business-critical queries that are invalid if
> > not served within a short time, long timeouts for scanning or indexed
> > queries, and even longer timeouts for administrative tasks like
> > TRUNCATE and DROP.
> >
> >
> > https://issues.apache.org/jira/browse/CASSANDRA-2848
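
A sketch of what a request-level timeout could look like from the driver
side; the field name `timeout_ms` is a placeholder, since the actual
encoding would be decided in the ticket:

```python
def make_request(query, timeout_ms=None):
    """Build a request, optionally overriding the server-side default
    timeout for this one request. `timeout_ms` is a hypothetical field
    name used purely for illustration."""
    request = {"query": query}
    if timeout_ms is not None:
        request["timeout_ms"] = timeout_ms
    return request

# The three classes of timeout described above:
critical = make_request("SELECT balance FROM accounts WHERE id = ?",
                        timeout_ms=50)        # invalid if not served fast
scan = make_request("SELECT * FROM events WHERE day = ?",
                    timeout_ms=30_000)        # long-running scan
admin = make_request("TRUNCATE events",
                     timeout_ms=120_000)      # administrative task
```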
> >
> >
> > 3. Shard-aware driver
> >
> >
> > This admittedly is a burning issue for ScyllaDB, but not so much for
> > Cassandra at this time.
> >
> >
> > In the same way that drivers are token-aware, they can be shard-aware -
> > know how many shards each node has, and the sharding algorithm. They can
> > then open a connection per shard and send CQL requests directly to the
> > shard that will serve them, instead of requiring cross-core communication
> > to happen on the server.
> >
> >
> > https://issues.apache.org/jira/browse/CASSANDRA-10989
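
A minimal sketch of the client side of this idea. The plain-modulo shard
mapping and the `connect(shard_id)` factory are illustrative assumptions,
not the real sharding algorithm, which the server would have to advertise:

```python
def shard_of(token, shard_count):
    """Map a token to a shard. Plain modulo is used purely for
    illustration; a real driver would use whatever sharding algorithm
    the server advertises."""
    return token % shard_count

class ShardAwareSession:
    """One connection per shard; each request goes straight to the shard
    that owns its token, avoiding cross-core hand-off on the server."""

    def __init__(self, connect, shard_count):
        # `connect(shard_id)` is a hypothetical factory that opens a
        # connection pinned to the given shard.
        self.shard_count = shard_count
        self.connections = {s: connect(s) for s in range(shard_count)}

    def execute(self, token, query):
        # Route directly to the owning shard's connection.
        return self.connections[shard_of(token, self.shard_count)].send(query)
```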
> >
> >
> > I see three possible modes of cooperation:
> >
> >
> > 1. The protocol change is developed using the Cassandra process in a
> > JIRA ticket, culminating in a patch to doc/native_protocol*.spec when
> > consensus is achieved.
> >
> >
> > The advantage to this mode is that Cassandra developers can verify that
> > the change is easily implementable; when they are ready to implement the
> > feature, drivers that were already adapted to support it will just work.
> >
> >
> > 2. The protocol change is developed outside the Cassandra process.
> >
> >
> > In this mode, we develop the change in a forked version of
> > native_protocol*.spec; Cassandra can still retroactively merge that
> > change when (and if) it is implemented, but the ability to influence
> > the change during development is reduced.
> >
> >
> > If we agree on this, I'd like to allocate a prefix for feature names in
> > the SUPPORTED message for our use.
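
For illustration, a driver could pick vendor extensions out of the
SUPPORTED options map like this; `SCYLLA_` is only a stand-in for whatever
prefix would actually be allocated:

```python
def vendor_extensions(supported_options, prefix="SCYLLA_"):
    """Pick vendor-prefixed feature names out of a SUPPORTED-style
    options map (feature name -> list of values). The prefix is a
    hypothetical placeholder, not an allocated name."""
    return {name: values
            for name, values in supported_options.items()
            if name.startswith(prefix)}
```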
> >
> >
> > 3. No cooperation.
> >
> >
> > This requires the least amount of effort from Cassandra developers
> > (just enough to reach this point in this email), but will cause
> > duplication of effort for driver authors who wish to support both
> > projects, and may cause Cassandra developers to redo work that we
> > already did.
> >
> >
> > Looking forward to your views.
> >
> >
> > Avi
> >
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: dev-unsubscribe@xxxxxxxxxxxxxxxxxxxx
> > For additional commands, e-mail: dev-help@xxxxxxxxxxxxxxxxxxxx
> >
> >
>