Re: how to override s3 key config in flink job
File system factories are class-loaded into the running JVMs of the task executors.
That is why the file system objects they configure are shared by all Flink jobs running on the cluster.
At the moment it is not possible to change their options per created file system or per job.
This could be changed, e.g. for S3, by passing a "rewriting config" to the file system factory's "get" method,
but users do not usually call this method directly in user-facing components, like checkpointing or the file sink. The user-facing API is currently just the file system URI string, without any job-specific config.
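In practice this means S3 credentials have to be set cluster-wide, e.g. in flink-conf.yaml before the cluster starts (a sketch using the standard `s3.access-key`/`s3.secret-key` options of the Flink S3 file systems; placeholder values are obviously not real credentials):

```yaml
# flink-conf.yaml — these options apply to every job on this cluster,
# because the S3 file system factory reads them once at startup.
s3.access-key: <your-access-key>
s3.secret-key: <your-secret-key>
```

Changing these requires restarting the cluster; there is no per-job override.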
I see the value in making this possible, but it would require some involved changes to the file-system-dependent APIs, or a change to the way file systems are created in general.
You could create a JIRA issue to discuss it.
> On 27 Nov 2018, at 10:06, yinhua.dai <yinhua.2018@xxxxxxxxxxx> wrote:
> It might be difficult as the task manager and job manager are pre-started
> in a session mode.
> It seems that the Flink http server will always use the configuration that
> you specified when you started your Flink cluster, i.e. via start-cluster.sh;
> I don't find a way to override it.