
Re: FlinkKinesisProducer weird behaviour


Thanks Piotr for your response.

I've further investigated the issue and found the root cause.

There are 2 possible ways to produce/consume records to/from Kinesis:
  1. Using the Kinesis Data Streams service API directly
  2. Using the KCL & KPL.
The FlinkKinesisProducer uses the AWS KPL to push records into Kinesis for optimized performance. One of the features of the KPL is Aggregation: it batches many UserRecords into a single Kinesis Record to increase producer throughput.
The catch is that consumers of that stream need to be aware that the records they read are aggregated, and must de-aggregate them accordingly [1][2].
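For anyone hitting the same symptom, here is a minimal sketch (not from the original thread) of how a consumer can tell a KPL-aggregated record apart from a plain one: per the AWS docs, KPL-aggregated record data starts with the 4-byte magic prefix 0xF3 0x89 0x9A 0xC2, followed by a protobuf payload and an MD5 checksum. The class and method names here are my own for illustration.

```java
import java.util.Arrays;

public class KplAggregationCheck {
    // KPL-aggregated Kinesis records begin with this 4-byte magic prefix
    // (see the KPL aggregated record format in the AWS documentation).
    private static final byte[] KPL_MAGIC =
            {(byte) 0xF3, (byte) 0x89, (byte) 0x9A, (byte) 0xC2};

    /** Returns true if the raw record data looks KPL-aggregated. */
    public static boolean isAggregated(byte[] data) {
        return data != null
                && data.length >= KPL_MAGIC.length
                && Arrays.equals(Arrays.copyOfRange(data, 0, KPL_MAGIC.length), KPL_MAGIC);
    }

    public static void main(String[] args) {
        byte[] aggregated = {(byte) 0xF3, (byte) 0x89, (byte) 0x9A, (byte) 0xC2, 0x0A, 0x01};
        byte[] plain = "123-AAAAAAAAAAA\n".getBytes();
        System.out.println(isAggregated(aggregated)); // true
        System.out.println(isAggregated(plain));      // false
    }
}
```

Those "weird illegal characters" wrapping the record data are exactly this framing (magic prefix, protobuf field tags, checksum) showing up in the Base64-decoded payload.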

In my case, the output stream is consumed by Druid, so the consumer code is not under my control...
So my choices are either to disable the Aggregation feature by setting AggregationEnabled to false in the Kinesis producer configuration, or to write a custom de-aggregating consumer for Druid.
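For reference, a minimal sketch of the first option: the Flink Kinesis connector passes KPL configuration keys through the producer `Properties`, so aggregation can be switched off with the KPL property "AggregationEnabled". The class and method names here are illustrative; region/credentials setup is omitted.

```java
import java.util.Properties;

public class ProducerConfigSketch {

    /** Builds a producer config with KPL aggregation disabled. */
    public static Properties buildConfig() {
        Properties producerConfig = new Properties();
        // Region and credential settings omitted; configure as usual
        // for the Flink Kinesis connector.
        // Disable KPL aggregation so consumers that do not de-aggregate
        // (e.g. Druid's stock Kinesis consumer) can read plain records.
        producerConfig.put("AggregationEnabled", "false");
        return producerConfig;
    }

    public static void main(String[] args) {
        // This Properties object would then be passed to the
        // FlinkKinesisProducer constructor alongside the serialization schema.
        System.out.println(buildConfig().getProperty("AggregationEnabled")); // false
    }
}
```

The trade-off: with aggregation off, each UserRecord becomes its own Kinesis record, so per-shard producer throughput drops.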

I think we should state this in the documentation for the Flink Kinesis connector.

[1] https://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-consumer-deaggregation.html
[2] https://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-integration.html


On Thu, May 24, 2018 at 11:18 AM, Piotr Nowojski <piotr@xxxxxxxxxxxxxxxxx> wrote:

Have you tried writing the same records, with exactly the same configuration, to Kinesis but outside of Flink (with a standalone Java application)?


On 24 May 2018, at 09:40, Rafi Aroch <rafi.aroch@xxxxxxxxx> wrote:


We're using Kinesis as both the input and output of a job and are experiencing parsing exceptions while reading from the output stream. All streams contain 1 shard only.

While investigating the issue I noticed a weird behaviour where records get a PartitionKey I did not assign, and the record Data is wrapped with seemingly random illegal characters.

I wrote a very basic program to try to isolate the problem, but still I see this happening:
  • I wrote a simple SourceFunction which generates messages of the pattern - <sequence#>-AAAAAAAAAAA\n
  • FlinkKinesisProducer writes the messages to the Kinesis stream with a default partitionKey of "0" - so I expect ALL records to have a partitionKey of "0"
To verify the records in the Kinesis stream I use AWS CLI get-records API and see the following:

            "SequenceNumber": "49584735873509122272926425626745413182361252610143420418",
            "ApproximateArrivalTimestamp": 1527144766.662,
            "PartitionKey": "a"
            "SequenceNumber": "49584735873509122272926425626746622108180867308037603330",
            "ApproximateArrivalTimestamp": 1527144766.86,
            "PartitionKey": "0"

Where did PartitionKey "a" come from?

Furthermore, if you Base64-decode the record data, you see that all records written with this PartitionKey "a" are wrapped with weird illegal characters.
For example:


While the records with PartitionKey "0" look good:


I tried both version 1.4.2 and 1.6-SNAPSHOT and still see the issue...

Am I missing anything? Has anyone encountered such an issue?

Would appreciate any help,