Hi,

A general solution for state/schema migration is under development, and it might be released with Flink 1.6.0. Until then, you need to handle the state migration manually in your operator’s open() method.

Let’s assume that your OperatorV1 has a state field “stateV1”, and your OperatorV2 defines a field “stateV2” which is incompatible with the previous version. What you can do is add logic to the open() method to check:

1. If “stateV2” is non-empty, do nothing.
2. If there is no “stateV2”, iterate over all of the keys and manually migrate “stateV1” to “stateV2”.

In your OperatorV3 you could then drop the support for “stateV1”.

I have once implemented something like that here:
https://github.com/pnowojski/flink/blob/bfc8858fc4b9125b8fc7acd03cb3f95c000926b2/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperator.java#L258

Hope that helps!

Piotrek

On 6 Jun 2018, at 17:04, TechnoMage <mlatta@xxxxxxxxxxxxxx> wrote:

We are still pretty new to Flink and I have a conceptual / DevOps question.
When a job is modified and we want to deploy the new version, what is the preferred method? Our jobs have a lot of keyed state.
If we use snapshots we have old state that may no longer apply to the new pipeline.
If we start a new job we can reprocess historical data from Kafka, but that can be very resource heavy for a while.
Is there an option I am missing? Are there facilities to “patch” or “purge” selectively the keyed state?
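For illustration, the two-step check Piotrek describes for the open() method can be sketched in plain Java. This is a simplified, hypothetical sketch: ordinary HashMaps stand in for Flink’s keyed state backends, and the names (stateV1, stateV2, open) are illustrative, not Flink API. In a real operator you would use the state descriptors and applyToAllKeys-style iteration from the linked WindowOperator code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the migration logic. Plain maps stand in for
// Flink keyed state; the V1 schema stores counts as Strings, the
// incompatible V2 schema stores them as Longs.
class StateMigrationSketch {

    // Old per-key state (OperatorV1 schema).
    final Map<String, String> stateV1 = new HashMap<>();

    // New per-key state (OperatorV2 schema).
    final Map<String, Long> stateV2 = new HashMap<>();

    // Run once when the operator is opened after a restore:
    // 1. if stateV2 already has entries, migration already happened -> no-op
    // 2. otherwise iterate over all keys and convert stateV1 into stateV2
    void open() {
        if (!stateV2.isEmpty()) {
            return; // already migrated on an earlier restore
        }
        for (Map.Entry<String, String> e : stateV1.entrySet()) {
            stateV2.put(e.getKey(), Long.parseLong(e.getValue()));
        }
        // An OperatorV3 could additionally clear stateV1 here and later
        // drop the field entirely, since nothing reads it anymore.
    }
}
```

Because the check makes open() idempotent, restoring the same savepoint into OperatorV2 multiple times never migrates twice or overwrites values that V2 has already updated.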