The problem is that we're clipping in integer coordinates without
taking into account the Bresenham error/step terms.
For instance, take the example in the Description. The original line
was from (0, y) to (10, y+1). When we clipped the line at x=2, we essentially
told d3d to draw a line from (2, y) to (10, y+1). The point at which we stepped
down to (y+1) on the first line was at x=5, but the Bresenham algorithm for
the second line would have us step down at x=6 (halfway between the start and
end points).
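To make the mismatch concrete, here is a small standalone sketch (illustrative Java, not the actual pipeline code) that traces one common Bresenham variant for both the original line and the naively clipped one. The exact x at which the step occurs depends on how the error term is initialized, but the two traces disagree either way:

```java
// Sketch: why restarting Bresenham at an integer-clipped endpoint
// changes pixel coverage. Names here are illustrative only.
public class BresenhamClipDemo {
    // Returns y[x] for x in [x0..x1] for a shallow line (dx >= dy >= 0),
    // using one common error-term initialization (err = dx / 2).
    static int[] traceY(int x0, int y0, int x1, int y1) {
        int dx = x1 - x0, dy = y1 - y0;
        int[] ys = new int[dx + 1];
        int err = dx / 2, y = y0;
        for (int x = x0; x <= x1; x++) {
            ys[x - x0] = y;
            err -= dy;
            if (err < 0) { y++; err += dx; }
        }
        return ys;
    }

    public static void main(String[] args) {
        int[] full    = traceY(0, 0, 10, 1); // original line
        int[] clipped = traceY(2, 0, 10, 1); // naively clipped at x=2
        for (int x = 2; x <= 10; x++) {
            if (full[x] != clipped[x - 2]) {
                System.out.println("mismatch at x=" + x
                        + ": unclipped y=" + full[x]
                        + ", clipped y=" + clipped[x - 2]);
            }
        }
    }
}
```

With this particular variant the traces disagree at x=6: the unclipped line has already stepped down to y+1 there, while the restarted line has not.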
The fix here is to take advantage of the fact that d3d uses floating point
coordinates for its lines and clip in float coordinates instead of
integer. So instead of clipping the line above at (2, y), we would clip
it at (2, y+.2), which is essentially where the line would be in sub-pixel coordinates if drawn from the original point of (0, y).
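The clip point is just the intersection of the original line with the clip edge, computed in floats. A minimal sketch of that arithmetic (hypothetical names, left edge only):

```java
// Sketch of the sub-pixel clip computation described above: intersect
// the original line with the clip edge in float coordinates so the
// clipped segment stays on the same ideal line. Illustrative only.
public class FloatClip {
    // Clip the segment (x0,y0)-(x1,y1) against the left edge x = xc,
    // returning the new start point {xc, y at xc}.
    static float[] clipLeft(float x0, float y0,
                            float x1, float y1, float xc) {
        float t = (xc - x0) / (x1 - x0);  // fraction of the line consumed
        return new float[] { xc, y0 + t * (y1 - y0) };
    }

    public static void main(String[] args) {
        // The example from the text, with y = 0: clipping (0, 0)-(10, 1)
        // at x=2 yields a start point of (2, 0.2) rather than (2, 0).
        float[] p = clipLeft(0f, 0f, 10f, 1f, 2f);
        System.out.println(p[0] + ", " + p[1]);
    }
}
```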
The "clip in float coordinates" worked fine except for one minor detail: the d3d
hardware I tested with (actually both an nVidia card and an ATI radeon)
used such low floating-point precision that there were many pixelization
errors (clipped lines drawn with different pixels than unclipped). So we had
to take a different approach.
We now enable d3d clipping and set up the viewport appropriately every time
we draw a line (or rectangle). This allows us to draw with the unclipped
coordinates, which means that the d3d hardware does the same setup for
clipped and unclipped lines and thus ends up with the same pixels covered
inside the clipped area.
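The effect can be modeled in software: rasterize the original, unclipped line and let the clip bounds merely discard pixels, which by construction leaves exactly the pixels the unclipped line would have produced inside the clip. A sketch of that model (illustrative Java, not the d3d code), contrasted with restarting the rasterizer at the clipped endpoint:

```java
import java.util.ArrayList;
import java.util.List;

// Software model of viewport-style clipping: walk the ORIGINAL line
// and discard out-of-bounds pixels, instead of re-deriving a new line
// from a clipped endpoint. Illustrative only.
public class ScissorModel {
    // One common Bresenham variant for shallow lines (dx >= dy >= 0):
    // returns {x, y} pairs for every pixel touched.
    static List<int[]> trace(int x0, int y0, int x1, int y1) {
        List<int[]> pts = new ArrayList<>();
        int dx = x1 - x0, dy = y1 - y0, err = dx / 2, y = y0;
        for (int x = x0; x <= x1; x++) {
            pts.add(new int[] { x, y });
            err -= dy;
            if (err < 0) { y++; err += dx; }
        }
        return pts;
    }

    // "Viewport" clipping: keep only pixels of the original line
    // with x >= xmin; coverage inside the clip is unchanged.
    static List<int[]> scissor(int x0, int y0, int x1, int y1, int xmin) {
        List<int[]> kept = new ArrayList<>();
        for (int[] p : trace(x0, y0, x1, y1))
            if (p[0] >= xmin) kept.add(p);
        return kept;
    }

    public static void main(String[] args) {
        List<int[]> viaScissor = scissor(0, 0, 10, 1, 2);
        List<int[]> viaReclip  = trace(2, 0, 10, 1);
        for (int i = 0; i < viaScissor.size(); i++) {
            if (viaScissor.get(i)[1] != viaReclip.get(i)[1])
                System.out.println("coverage differs at x="
                        + viaScissor.get(i)[0]);
        }
    }
}
```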
This last attempt worked fine except that some cards end up with the same
line-clipping artifacts under viewport clipping that I saw when I tried
integer clipping (which resulted in this bug) and floating-point clipping
(which had the problems described above).
Another approach I tried involved using the d3d clip planes, but these appear to be ignored completely for screen-space primitives.
The final result was this:
- handle clipping via viewport clipping, as described above
- beef up the runtime test that determines whether or not to use d3d to
render lines. Now this test includes a clipping check that verifies that
a clipped line is drawn in the same pixel path as the unclipped line.
Specifically, we draw a nearly-horizontal line:
And then redraw it several times, clipping on the left further each time.
We should end up with the same pixels covered after all of these lines, but
what we end up with on some hardware is something more like this:
(note the overlap of pixels in the first row).
- If the test passes for regular d3d lines but fails for clipped lines, we simply disable
d3d line clipping on this device and end up handling diagonal clipped lines
through the same mechanism as jdk1.4.1.
- If line clipping passes the test, we enable line clipping via d3d and all
lines are rendered through d3d (barring any other problems that crop up).
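The shape of that runtime check can be sketched as follows, using a BufferedImage as a stand-in for rendering through the d3d device and reading the pixels back (names and dimensions here are illustrative, not the actual test code):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Sketch of the clip-quality check: draw a nearly-horizontal line,
// then redraw it with the clip moved further right each time, and
// verify that every clipped pixel is one the unclipped line covered.
public class ClipQualityCheck {
    // Renders the test line with a left clip at clipX (0 = no clip)
    // and returns a coverage map of which pixels were touched.
    static boolean[][] render(int clipX) {
        BufferedImage img =
            new BufferedImage(12, 3, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.WHITE);
        if (clipX > 0) g.setClip(clipX, 0, 12 - clipX, 3);
        g.drawLine(0, 0, 10, 1);  // the nearly-horizontal test line
        g.dispose();
        boolean[][] on = new boolean[3][12];
        for (int y = 0; y < 3; y++)
            for (int x = 0; x < 12; x++)
                on[y][x] = (img.getRGB(x, y) & 0xffffff) != 0;
        return on;
    }

    public static void main(String[] args) {
        boolean[][] full = render(0);
        for (int clipX = 1; clipX <= 5; clipX++) {
            boolean[][] clipped = render(clipX);
            for (int y = 0; y < 3; y++)
                for (int x = 0; x < 12; x++)
                    if (clipped[y][x] && !full[y][x])
                        System.out.println("extra pixel at " + x + ","
                                + y + " for clip=" + clipX);
        }
    }
}
```

Java2D's software loops clip without re-deriving the line, so this stand-in always passes; the point of the real check is that some d3d hardware does not.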
The end results of these changes are:
- Performance is similar to jdk1.4.1 on platforms that do not do correct clipping
- Performance is much improved (jdk1.4.2 vs. jdk1.4.1) on platforms that
can handle correct clipping
- Quality is the same on both releases (and should be similar to the line
quality of our own software loops).
Note that we should work on an advanced approach to line drawing in the future,
one that handles line clipping on all d3d hardware using a mask (either the
zbuffer or the stencil buffer). That fix is beyond the scope of jdk1.4.2, but
would be a reasonable performance feature for a future release.