Re: Accessing source table data from Hive/Presto


Hi Mugunthan,

This depends on the type of your job. Is it a batch or a streaming job?
Some queries could be ported to Flink's SQL API, as suggested by the link that Hequn posted. In that case, the query would be executed in Flink itself.
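For example, a minimal batch sketch of that route (using the Flink 1.5-era Table API; the CSV source, table name, schema, and path below are made-up placeholders you would adapt) could look like this:

import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.Types;
import org.apache.flink.table.api.java.BatchTableEnvironment;
import org.apache.flink.table.sources.CsvTableSource;
import org.apache.flink.types.Row;

public class SqlPortSketch {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    BatchTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

    // Register the data (here a CSV export) as a table so the query
    // can run inside Flink instead of in Hive/Presto.
    CsvTableSource source = CsvTableSource.builder()
        .path("/data/orders.csv")            // placeholder path
        .field("customer", Types.STRING())
        .field("amount", Types.LONG())
        .build();
    tEnv.registerTableSource("orders", source);

    // The ported query now runs in Flink.
    Table result = tEnv.sqlQuery(
        "SELECT customer, SUM(amount) FROM orders GROUP BY customer");
    tEnv.toDataSet(result, Row.class).print();
  }
}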

Other options are to use a JDBC InputFormat, or to persist the result to files and ingest them with a monitoring file source.
With these options, the query runs in Hive/Presto and Flink only ingests the result (via JDBC or files).
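A rough sketch of the JDBC variant, using Flink's JDBCInputFormat with the Presto JDBC driver (the host, catalog, user, query, and result schema are placeholders you would need to adapt):

import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
import org.apache.flink.api.java.typeutils.RowTypeInfo;

public class PrestoJdbcIngest {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    // Row type of the query result; adjust to your actual schema.
    RowTypeInfo rowTypeInfo = new RowTypeInfo(
        BasicTypeInfo.STRING_TYPE_INFO,
        BasicTypeInfo.LONG_TYPE_INFO);

    // The query is executed by Presto; Flink only reads the result set.
    JDBCInputFormat inputFormat = JDBCInputFormat.buildJDBCInputFormat()
        .setDrivername("com.facebook.presto.jdbc.PrestoDriver")
        .setDBUrl("jdbc:presto://presto-host:8080/hive/default")
        .setUsername("user")
        .setQuery("SELECT customer, amount FROM orders")
        .setRowTypeInfo(rowTypeInfo)
        .finish();

    env.createInput(inputFormat).print();
  }
}

And a sketch of the file variant, where a streaming job continuously picks up result files that Hive/Presto writes to a directory (the directory and scan interval are assumptions):

import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.FileProcessingMode;

public class MonitorExportedFiles {
  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();

    String exportDir = "hdfs:///exports/daily";  // hypothetical export path
    TextInputFormat format = new TextInputFormat(new Path(exportDir));

    // Re-scan the directory every 60s and ingest newly appearing files.
    // Note: modified files are re-processed in their entirety.
    DataStream<String> lines = env.readFile(
        format, exportDir, FileProcessingMode.PROCESS_CONTINUOUSLY, 60_000L);

    lines.print();
    env.execute("Monitor exported Hive/Presto results");
  }
}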

Which solution works best for you depends on the details.

Best, Fabian

2018-08-07 3:28 GMT+02:00 Hequn Cheng <chenghequn@xxxxxxxxx>:
Hi srimugunthan,

I found a related link [1]. Hope it helps.


On Tue, Aug 7, 2018 at 2:35 AM, srimugunthan dhandapani <srimugunthan.dhandapani@gmail.com> wrote:
Hi all,
I read the Flink documentation and came across the supported connectors.

We have some data residing in Hive/Presto that needs to be made available to the Flink job. The data in Hive or Presto may be updated once a day or less frequently.

Ideally, we would connect to Hive or Presto, run the query, get back the results, and use them in a Flink job.
What are the options to achieve something like that?

Thanks,
mugunthan