
Pythonic Y2K


Dave,

You have me worried now. Yes, years from now you may need experts who can
handle not just 2.X but specific versions like 2.4. If Python keeps making
incompatible changes and reaches, say, version 9.02, you may also have 3.X
experts and 4.X experts and so on.

Of course, by then, some of the experts may be an AI specializing ...

All kidding aside, I have to wonder whether future developments may produce
new categories of computer languages, designed anew in radically different
ways, compelling enough that many people switch and older languages halt
further development.

I see Unicode as a potential driver. The symbol set in languages like
Python is limited to what appears on a standard keyboard. Until that
keyboard changes a bit, or some virtual version does, to support many more
characters, we are forced into the amount of sharing and overloading we do
now. How many ways is "%" used, even within one line of code?

>>> print("%d %s " % (9 % 5, "ways to use %"))
4 ways to use %

Similarly, {} can be used for dictionaries or sets, but an empty {} always
creates a dictionary, never a set. [] can be used to index lists by
position or dictionaries by key. There are only so many such matched pairs
of characters available: < and > are reserved for other uses, and
parentheses play an odd double role, forming tuples as well as keeping
things in some order of operations. Imagine adding a few more matched
symbols, including some you could define for your own newly created kinds
of data, like matrices. Similarly, you could have an abbreviated way of
defining additional operations if you could just use some common
mathematical symbols that are not in ASCII, not to mention some dingbats.
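A quick illustration of the overloading above, as a minimal sketch in
standard Python:

```python
# {} with contents can mean a dict or a set, but an empty {} is
# always a dict; the only way to write an empty set is set().
print(type({}))               # <class 'dict'>
print(type(set()))            # <class 'set'>
print(type({1, 2, 3}))        # <class 'set'>
print(type({"a": 1}))         # <class 'dict'>

# [] likewise does double duty: positional indexing on lists,
# key lookup on dictionaries.
print([10, 20, 30][1])        # 20
print({"year": 2038}["year"]) # 2038
```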

If a programming language leaped across the ASCII divide (and I am sure
some have, including languages that used backspace to build overstruck
multi-character operators), I can see ways to make languages more compact
yet less confusing. I admit that might confuse some people, especially
those who only really know one language. I am used to multiple languages,
including some with rather distinctive character sets, so perhaps I would
be the only one willing to use such a language. OK, sort of kidding.
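In fact, Python 3 has already stepped partway across that divide:
identifiers may use non-ASCII letters (PEP 3131), even though the operator
symbols themselves remain fixed ASCII. A small sketch, with made-up
Greek-letter names purely for illustration:

```python
# Non-ASCII identifiers are legal in Python 3 (PEP 3131);
# custom operator symbols, however, are still not.
π = 3.14159265358979
Δt = 2.0

def ω(radius):
    """Toy function showing Greek-letter names in ordinary code."""
    return 2 * π * radius / Δt

print(ω(1.0))  # 2 * π * 1.0 / 2.0, i.e. π itself
```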

I have seen many forums like this one (and not just about computer
languages) where I encounter true believers who do not welcome any
suggestion that there may be other things out there with some merit, or
that their own may change. I welcome change and am interested in different
ways of thinking, which makes it harder for me to see the viewpoint I
associate with stasis. But, to each their own. Perhaps literally.

-----Original Message-----
From: Python-list <python-list-bounces+avigross=verizon.net at python.org> On
Behalf Of DL Neil
Sent: Wednesday, January 16, 2019 11:04 PM
To: Python <python-list at python.org>
Subject: Re: Pythonic Y2K

On 17/01/19 4:45 PM, Larry Martell wrote:
> On Wed, Jan 16, 2019 at 9:35 PM Avi Gross <avigross at verizon.net> wrote:
>>
>> Chris,
>>
>> The comparison to Y2K was not a great one. I am not sure what people
>> did in advance, but all it took was to set the clock forward on a
>> test system and look for anomalies. Not everything would be found but
>> it gave some hints.
> 
> Clearly you did not live through that. I did and I got over 2 years of 
> real work from it. Companies hired me to check their code and find 
> their Y2K exposures. Things like a hard coded '19' being added to a 2 
> digit year. Or code that only allocated 2 bytes for the year. I could 
> go on and on. At one client I had I found over 4,000 places in their 
> code that needed to be modified. And there was no widespread use of 
> VMs that you could easily and quickly spin up for testing. It was a 
> real problem but because of many people like me, it was dealt with.
> Now the next thing to deal with is the Jan. 19, 2038 problem. I'll be
> 80 then, but probably still writing code. Call me if you need me.


Same.

The easy part was finding the hardware, configuring identical systems, and
changing the date-era. Remember that we pre-dated TDD, so we pretty much
re-designed entire testing suites! The most difficult work was with the
oldest systems - for which there was no/little/worthless documentation, and
usually no dev staff with 'memory'.

Then there were the faults in OpSys and systems programs on which we could
supposedly rely - I still have a couple of certificates somewhere, for
diagnosing faults which MSFT had not found... The difficulty of multi-layer
fault-finding is an order of magnitude more difficult than Python debugging
alone!

I'm told there are fewer and fewer COBOL programmers around, and those that
survive can command higher rates as a consequence. Would going 'back' to
that be regarded as "up" skilling?

Does this imply that there might one day be a premium chargeable by Py2.n
coders?

--
Regards =dn
--
https://mail.python.org/mailman/listinfo/python-list