
Entering a very large number

On Mon, 26 Mar 2018 02:37:44 +0100, bartc wrote:

> Calling a function that sets up C using 'C = 288714...' on one line, and
> that then calculates D=C+C, takes 0.12 seconds to call 1000000 times.
> To do D=C*C, takes 2.2 seconds (I've subtracted the function call
> overhead of 0.25 seconds; there might not be any function call).

Bart, this is a contrived example that really proves nothing. It is 
literally the sort of premature optimization that Knuth and others so 
frequently warn about. In *almost any* real program, you're not going to 
be calculating C+C over and over and over and over again, millions of 
times in a row, while doing *literally* nothing else.
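For what it's worth, a measurement of that shape is easy enough to set up 
with timeit (a sketch only -- the stand-in constant below is hypothetical, 
since Bart's exact 397 digits aren't reproduced here):

```python
import timeit

# Hypothetical stand-in for bartc's big constant: a 400-digit number.
C = int("2887143754" * 40)

# Time a million bare additions and multiplications, nothing else.
t_add = timeit.timeit("D = C + C", globals={"C": C}, number=1_000_000)
t_mul = timeit.timeit("D = C * C", globals={"C": C}, number=1_000_000)

print(f"add: {t_add:.2f}s  mul: {t_mul:.2f}s")
```

Which tells you precisely how fast an operation is that no realistic 
program performs a million times back to back.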

If your point is that, *hypothetically*, there could be some amazingly 
unlikely set of circumstances where that's exactly what I will need to 
do, then fine, I'll deal with those circumstances when it happens and not 
a moment before.

And I'll probably deal with it by calculating C+C once in Python, then 
hard-coding D = that number, reducing the runtime cost of the addition to 
zero.
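That is, something along these lines (a sketch, again with a hypothetical 
stand-in constant):

```python
# The expensive value is computed once, at module import time --
# or you paste the resulting literal in directly.
C = int("2887143754" * 40)  # hypothetical stand-in for the real constant
D = C + C                   # one addition, ever

def hot_loop(n):
    # Inside the loop, D is just a name lookup: no big-int
    # arithmetic is performed per iteration.
    for _ in range(n):
        use = D
    return use
```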

> If I instead initialise C using 'C = int("288712...")', then timings
> increase as follows:

Given that the original number given had 397 digits and has a bit length 
of 1318, I must admit to some curiosity as to how exactly you managed to 
cast it to a C int (32 bits on most platforms).

It is too big for an int, a long (64 bits), a long-long (also 64 bits on 
common platforms, despite the name) or even a long-long-long-long-long-
long-long-long-long-long-long-long-long-long-long-long (1024 bits), if 
such a thing even exists.

So what exactly did you do?