This bug should only occur in 32 bpp screen depth. If users run in any
other depth (8, 15, 16, 24), things should work fine.
This hang is due to an infinite loop in AwtWin32GraphicsDevice::Initialize().
NetSupport does something very funky when it connects to the client: it fakes
out the OS into reporting that there are two monitors. When a Java app runs, we
set up color models for each device/monitor on the system. For this second
NetSupport monitor, we go through the same Initialize() function that we
always do, only we get an error this time when calling GetDIBits().
We assert on this error in debug mode, but otherwise do nothing about it.
In particular, we assume that the values set in this function are valid and
we continue on about our business in that function. The problem is that we
spin in loops around the red/green/blue mask values that were supposed to be
set in that function, waiting for a certain condition to become true (we shift
the values until their low bit is non-zero). But since the function errored
out, the values are still 0, and we end up spinning in those loops forever
(essentially, waiting for a value of 0 to become non-zero).
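The failure mode can be sketched in isolation. The function and variable names
below are illustrative, not the actual identifiers in
AwtWin32GraphicsDevice::Initialize(); the point is that a shift-until-low-bit
loop only terminates if the mask is non-zero, so a guard is needed:

```cpp
// Hypothetical sketch of the kind of loop used to find a color mask's
// bit offset: shift the mask right until its low bit is set. If
// GetDIBits() failed and the mask was left at 0, an unguarded version
// of this loop never terminates.
static int ShiftToLowBit(unsigned int mask, int* shift)
{
    if (mask == 0) {    // guard: a zero mask would spin forever
        return 0;       // report failure instead of looping
    }
    int s = 0;
    while ((mask & 0x1u) == 0) {  // terminates: mask is known non-zero
        mask >>= 1;
        ++s;
    }
    *shift = s;
    return 1;
}
```

With the guard, a failed GetDIBits() call degrades into a detectable error
instead of a hang.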
It's not clear why GetDIBits() is failing in this particular situation, although
it seems like a bug in the NetSupport code; we are passing in valid values
to the function and they work on every other platform. But for some
reason, the NetSupport code does not like to be called with the biCompression
field in the BITMAPINFOHEADER structure set to BI_BITFIELDS when the display
depth is 32 bpp. This is a valid value to pass, but it just doesn't work
in this situation.
There are two fixes that could be applied to our code to fix this:
- Trap the error and do something intelligent about it. At a minimum we
  could and should set default values for the RGB masks and exit the
  function cleanly without ever entering these loops.
- Move the 32 bpp case down to the 24 bpp case; let them both use
  BI_RGB instead of having 32 bpp use BI_BITFIELDS. Both are
  acceptable values for 32 bpp; BI_BITFIELDS is a little more
  general, but the masks for 32 bpp are pre-ordained anyway, so I don't
  know why we need that flexibility in our code.
Of course it would be nice if NetSupport fixed the bug in their app as well,
but that does not fix the whole problem and we cannot count on that
for all of our customers.
I will leave the code basically the way it is (leaving the 32 bpp case
as a BI_BITFIELDS case), and just handle the error in GetDIBits by
hardcoding the results.
By leaving the code basically as is, we run less risk of breaking something
else. For example, the SGI NT machines use a bizarre (or at least uncommon)
framebuffer format of RGBX. If I hardcode 32 bpp bitmasks for colormap
creation, will this cause bad color effects on such framebuffers? By
simply handling the error instead, I will essentially only change the
functionality on cases which, by definition, do not currently work, thus
leaving existing/working platforms (such as the SGI machine) alone.
Note to JDC submitters: This bug is fixed in a release (mantis, or 1.4.2)
that is not yet released. Thus you will not see the fix until you can
download the release that has the fix. There should be a beta available for
1.4.2 soon; when it is available, try that out and your problems should
be fixed.