Hi Dawid, Piotr,
I see that for the Kafka consumer base there are some migration tests here:
Since the Kafka consumer state is managed in the FlinkKafkaConsumerBase class, I assumed these tests would also cover migration between connector versions, but maybe I'm missing something?
I performed some tests on my own, and the migration of the Kafka consumer connector worked.
Regarding the Kafka producer, I am simply updating the job with the new connector, removing the previous one, and upgrading the job using a savepoint and the --allowNonRestoredState option.
So far my tests with this option are successful.
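For reference, a sketch of that upgrade sequence with the Flink CLI (the job ID, savepoint path, and jar name are placeholders):

```shell
# Take a savepoint of the running job before the upgrade
flink savepoint <jobId> hdfs:///savepoints

# Resume the rebuilt job (now using the universal connector) from that savepoint,
# skipping any state that no longer matches an operator (e.g. the old 0.11 producer state)
flink run -s hdfs:///savepoints/savepoint-<id> --allowNonRestoredState my-job.jar
```

Note that --allowNonRestoredState means the old producer's transactional-id state is simply dropped rather than migrated.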
I appreciate any help here to clarify my understanding.
AFAIK we do not support migrating state from one connector to another
one, which is in fact the case for the Kafka 0.11 connector and the "universal" one.
You might try to use the bravo project to migrate the state manually,
but unfortunately you would have to understand the internals of both
connectors. I am also pulling Piotr into the thread; maybe he can provide more
insight.
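To illustrate the type incompatibility reported below, here is a hypothetical, self-contained sketch (not actual Flink code): each producer declares its own nested NextTransactionalIdHint class, and serialized state stays bound to the exact class that wrote it, even when the two classes have identical fields.

```java
import java.io.*;

public class StateTypeDemo {
    // Stand-ins for FlinkKafkaProducer011.NextTransactionalIdHint
    // and FlinkKafkaProducer.NextTransactionalIdHint (field shapes are assumptions).
    static class Producer011 {
        static class NextTransactionalIdHint implements Serializable {
            long nextFreeTransactionalId = 42L;
        }
    }
    static class UniversalProducer {
        static class NextTransactionalIdHint implements Serializable {
            long nextFreeTransactionalId;
        }
    }

    public static void main(String[] args) throws Exception {
        // Serialize an instance of the "old" connector's hint class.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new Producer011.NextTransactionalIdHint());
        }
        // Deserialization yields the old class; it cannot be treated as the
        // new connector's hint class, even though the fields are identical.
        Object restored;
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            restored = in.readObject();
        }
        System.out.println(restored instanceof Producer011.NextTransactionalIdHint);      // true
        System.out.println(restored instanceof UniversalProducer.NextTransactionalIdHint); // false
    }
}
```

The same mismatch is what the state backend detects when the new producer tries to restore a list state written by the old one.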
On 18/12/2018 14:33, Edward Rojas wrote:
> I'm planning to migrate from the Kafka 0.11 connector to the new universal
> Kafka connector (1.0.0+), but I'm having some trouble.
> The Kafka consumer seems to be compatible, but when trying to migrate the
> Kafka producer I get an incompatibility error for the state migration.
> It looks like the producer uses a list state of type
> "NextTransactionalIdHint", but this class is specific to each producer
> (FlinkKafkaProducer011.NextTransactionalIdHint vs.
> FlinkKafkaProducer.NextTransactionalIdHint), and therefore the states are not
> compatible.
> I would like to know what the recommended way is to perform this kind of
> migration without losing state.
> Thanks in advance,
> Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/