
Why Does Matplotlib Choose The Wrong Range In Y Using Log Scale?

Using matplotlib 1.5.1 and Python 2.7.11, I noticed that I need to specify the y limits manually, or else only the point with the largest y value is plotted. Arrays behave the same way.
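A minimal sketch of the setup being described, assuming sample X and Y values that span several decades in y (they are illustrative, not the asker's actual data); the manual y limits are the workaround the question refers to:

import matplotlib.pyplot as plt

# Illustrative sample data, chosen so the y values span several decades.
X = [0.4, 1.1, 2.0, 2.8]
Y = [0.003, 0.2, 1.5, 16.0]

plt.scatter(X, Y)
plt.xscale('log')      # switching to log *after* plotting is the problematic order
plt.yscale('log')
plt.ylim(1e-3, 1e2)    # the manual y limits mentioned in the question
plt.show()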

Solution 1:

The problem arises because you first draw the scatter plot and then switch the scales to logarithmic, which effectively zooms in on the data. Reordering the calls removes the problem:

import matplotlib.pyplot as plt

# Switch both axes to log scale *before* plotting
plt.xscale('log')
plt.yscale('log')
plt.scatter(X, Y)

This produces the intended result (the second subplot in your question).

Solution 2:

It seems like matplotlib creates the y-axis ticks before the conversion to a log scale and then does not recreate them after the change: the y-axis on your first subplot starts at 10^1, not 10^-3. So change the scales before you plot (a quick way to inspect the limits matplotlib has picked is sketched after the snippet below).

# Set both scales before the scatter call so the limits are computed in log space
plt.xscale('log')
plt.yscale('log')
plt.scatter(X, Y)
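To see what limits matplotlib has actually chosen, you can query the axes directly. This is a rough diagnostic sketch using the same kind of illustrative sample data as above; the printed values will depend on your data and matplotlib version.

import matplotlib.pyplot as plt

# Illustrative sample data, as in the earlier sketch.
X = [0.4, 1.1, 2.0, 2.8]
Y = [0.003, 0.2, 1.5, 16.0]

ax = plt.gca()
ax.scatter(X, Y)
print(ax.get_ylim())   # limits computed while the axis was still linear
ax.set_yscale('log')
print(ax.get_ylim())   # limits after the switch to log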

I think if you plot the original scale beside the log scale, you can see why matplotlib treats the two axes differently. On a log scale there is no true zero, because log(0) is undefined, so the axis has to start somewhere above 0, and that is where the problem comes from. Your x axis ranges from 0 to 3, but y ranges from 0 to 16; when converting to log, matplotlib scales the x axis correctly, but because y spans an extra factor of 10, it misses the rescaling.
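If the scatter call has to happen before the scale change, the y limits can be pinned manually afterwards, which mirrors the workaround mentioned in the question. This is a sketch under the assumption that Y contains only positive values (again with illustrative sample data):

import matplotlib.pyplot as plt

# Illustrative sample data; assumes all y values are positive.
X = [0.4, 1.1, 2.0, 2.8]
Y = [0.003, 0.2, 1.5, 16.0]

plt.scatter(X, Y)
plt.xscale('log')
plt.yscale('log')
plt.ylim(min(Y) / 2.0, max(Y) * 2.0)   # widen the range slightly around the data
plt.show()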
