Where has the practice of sending screenshots as source code come from?
On Sun, Jan 28, 2018 at 5:46 PM, Steven D'Aprano
<steve+comp.lang.python at pearwood.info> wrote:
> On Sun, 28 Jan 2018 17:13:05 -0800, Dan Stromberg wrote:
>> It feel like it'd be possible to train a neural network to translate
>> text in a screenshot to plain text though.
> That would be OCR, which has been around long before neural networks.
> Neither OCR nor neural networks can magically enhance low-res pixellated
> images and *accurately* turn them into text.
Yes, I'm familiar with OCR, but last I heard it still required tuning
for a specific font.
Is it really true that OCR appeared long before neural networks
(NNs)? I first heard of NNs in the 80's, but of OCR more like the
90's. NNs have been around a long time, but it's only recently that
they've become highly useful, thanks to the increase in computing
power, an explosion of digital data availability, and algorithmic
improvements.
If an NN can translate English to German well, tell a cat from a dog,
and play Go well enough to beat the best human in the world, then an
NN might be able to do the job OCR was intended for, given adequate
training data. And we could probably write a little gobject
introspection app to generate copious training data.
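The training-data idea is attractive because the labels come for free:
render text you already know, and you have (image, text) pairs. Here's a
toy sketch of that round trip in pure Python; the 3x5 bitmap "font" and
the template-matching "recognizer" are purely illustrative stand-ins (a
real pipeline would render actual fonts, e.g. with Pillow or a GTK
rendering app, and train an NN on the results):

```python
# Toy illustration: generate labeled OCR training data by rendering
# known text, then recognize it by nearest-template matching.
# The tiny 3x5 glyphs below are made up for this sketch, not a real font.
GLYPHS = {
    "A": ["010", "101", "111", "101", "101"],
    "B": ["110", "101", "110", "101", "110"],
    "C": ["011", "100", "100", "100", "011"],
}

def render(text):
    """Render a string into 5 rows of '0'/'1' pixels using the toy font."""
    rows = ["" for _ in range(5)]
    for ch in text:
        glyph = GLYPHS[ch]
        for i in range(5):
            rows[i] += glyph[i] + "0"   # one blank column between glyphs
    return rows

def recognize(rows):
    """Segment fixed-width cells and classify each by Hamming distance."""
    width = 4  # 3 pixel columns per glyph + 1 blank separator
    out = ""
    for k in range(len(rows[0]) // width):
        cell = [r[k * width : k * width + 3] for r in rows]
        def dist(glyph):
            # Count pixels that differ between the cell and a template.
            return sum(a != b
                       for ra, rb in zip(cell, glyph)
                       for a, b in zip(ra, rb))
        out += min(GLYPHS, key=lambda c: dist(GLYPHS[c]))
    return out

print(recognize(render("CAB")))  # round-trips to "CAB"
```

The same render-then-label loop, done with real fonts at many sizes and
with added noise, is exactly the "copious training data" an NN would need.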