The implementation of math.sqrt can take advantage of anything specific to computing a square root, while ** has to handle arbitrary exponents, so x**0.5 goes through a generalist routine. A compiler optimization could special-case **0.5, but there would be the slight overhead of checking whether the optimization applies, if the optimizer even does it.
A simple test of
#!/usr/bin/python
import random
from math import sqrt

randomlist = []
for i in range(0, 1000000):
    n = random.randint(1, 2**16)
    randomlist.append(n)

newlist = [sqrt(x) for x in randomlist]
shows a lower execution time for me than the equivalent
#!/usr/bin/python
import random

randomlist = []
for i in range(0, 1000000):
    n = random.randint(1, 2**16)
    randomlist.append(n)

def sqrt(x):
    return x**0.5

newlist = [sqrt(x) for x in randomlist]
Not by a huge amount, but around 0.66s as opposed to 0.73s or so.
Granted, I'm doing far more work than just the sqrt, but remove the sqrt step entirely and execution time is around 0.63s. So relative to that baseline, the sqrt step itself seems quite a bit faster with math.sqrt.
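For what it's worth, here's a minimal timeit sketch (my own addition, assuming CPython 3 and the standard library timeit module) that isolates just the sqrt step by building the list in setup, so the random-number generation stays out of the measurement:

#!/usr/bin/python
import timeit

# Build the input list once in setup; only the square-root step is timed.
setup = """
import random
from math import sqrt
randomlist = [random.randint(1, 2**16) for _ in range(1000000)]
"""

t_sqrt = timeit.timeit("[sqrt(x) for x in randomlist]", setup=setup, number=10)
t_pow = timeit.timeit("[x**0.5 for x in randomlist]", setup=setup, number=10)

print("math.sqrt:", t_sqrt / 10, "s per pass")
print("x**0.5:  ", t_pow / 10, "s per pass")

That way the roughly 0.63s of list building is excluded from the numbers entirely.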
Anyway, it comes down to there being faster tricks specific to sqrt, while a generalist routine would at best have to decide whether a trick applies and then use it. Since compiling didn't speed anything up, I'm guessing the optimizer doesn't special-case it. There are even hardware sqrt instructions that I don't believe Python uses (?), though they might not have the properties the language desires.
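As a rough illustration of where the generality lives (a sketch of my own, assuming CPython; the exact opcode names vary by version), the bytecode shows that sqrt(x) is a plain call into a C routine while x**0.5 goes through the generic power operator:

#!/usr/bin/python
import dis
from math import sqrt

def with_sqrt(x):
    return sqrt(x)    # compiles to a call opcode into math.sqrt (C code)

def with_pow(x):
    return x ** 0.5   # compiles to a power opcode (BINARY_POWER / BINARY_OP **)

dis.dis(with_sqrt)
dis.dis(with_pow)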
Totally unrelated, but if you want to see how insane people get with this kind of thing: