(Removed dev@ from the mail thread)
I took a look at the logs you provided, and it seems like the sink operators should have been properly torn down, which would in turn close the RestHighLevelClient used internally.
At this point I'm not really sure what else could have caused this, besides a bug in the Elasticsearch client itself not cleaning up properly.
Have you tried turning on debug-level logging to see if there is anything suspicious?
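For reference, assuming the default log4j setup that ships with Flink 1.6 (conf/log4j.properties on each TaskManager), debug logging for the Elasticsearch client and its HTTP layer could be enabled with something like:

```properties
# Sketch only, assuming the stock Flink log4j.properties.
# Raise the Elasticsearch client and Apache HTTP client packages to DEBUG
# to see connection open/close activity in the TaskManager logs.
log4j.logger.org.elasticsearch=DEBUG
log4j.logger.org.apache.http=DEBUG
```

The TaskManagers need to be restarted for the change to take effect.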
On 13 December 2018 at 7:35:33 PM, Vijay Bhaskar (bhaskar.ebay77@xxxxxxxxx) wrote:
We are using Flink cluster 1.6.1 with the Elasticsearch connector.
Attached the stack trace.
Following are the max open file descriptor limit of the TaskManager
process and its open connections to the Elasticsearch cluster:

#lsof -p 62041 | wc -l

All the connections to the Elasticsearch cluster reached:

netstat -aln | grep 9200 | wc -l
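For anyone following along, the same descriptor and connection counts can be gathered from /proc as well. This is a sketch: PID 62041 is the TaskManager process from the report above, but the snippet uses the current shell's PID so it runs as-is.

```shell
# Replace with the TaskManager PID (62041 in the report above);
# $$ (the current shell) is used here only so the snippet is self-contained.
PID=$$

# Maximum number of open file descriptors allowed for this process
grep "Max open files" /proc/$PID/limits

# Current number of open file descriptors held by the process
ls /proc/$PID/fd | wc -l

# Open connections to Elasticsearch (default HTTP port 9200);
# netstat may be absent on minimal systems, hence the stderr redirect
netstat -aln 2>/dev/null | grep ':9200' | wc -l
```

Comparing the fd count against the limit over time shows whether the leak is in file descriptors generally or only in sockets to port 9200.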
Besides the information that Chesnay requested, could you also
provide a stack trace of the exception that caused the job to
terminate in the first place?
The Elasticsearch sink does indeed close the internally used
Elasticsearch client, which should in turn properly release all
connections. I would like to double-check whether the case here is
that that part of the code was never reached.
On 13 December 2018 at 5:59:34 PM, Chesnay Schepler (chesnay@xxxxxxxxxx) wrote:
Specifically which connector are you using, and which Elasticsearch version?
On 12.12.2018 13:31, Vijay Bhaskar wrote:
> We are using the Flink Elasticsearch sink, which streams at a rate of
> events/sec, as described in
> We are observing a leak of Elasticsearch connections. After
> minutes, all the open connections exceed the process's
> max open descriptor limit and the job is terminated. But the
> connections with the Elasticsearch server remain open forever. Am I
> missing any specific configuration setting to close the
> connection after serving the request?
> But no such setting is described in the above documentation
> of the Elasticsearch sink.