If casting a pointer to int is so wrong, then why does it feel so right?

4 Responses

  1. That’s why a safe language like Ada makes it painful: Unchecked_Conversion(Source => pointer, Target => int)… lots of characters to write… makes you think twice.

  2. And why are you casting a pointer to int? If you just want to do pointer offset math, convert it to uint8_t* or, better yet, use the high-level abstractions C already provides (array indexing and pointer arithmetic).

    The only legitimate reason I’ve seen for converting a pointer to an int (aside from trying to byte-address an array that’s normally handled as, say, 16- or 32-bit values, like stuff I run into all the time on the Nintendo DS due to its insane unaligned-access penalties) is for debug statements. Even there, printf("%08x", myPtr) isn’t actually safe; varargs don’t convert the pointer to an int for you, and the portable form is printf("%p", (void *)myPtr), which needs no integer cast at all.

  3. Er, for that second one I meant casting pointers *at all*. I’ve had a bit much to drink; a little impromptu “we’ve probably gone gold” soiree at a local pub.

  4. In my case, it’s part of an Operating Systems class project. Basically, the function I need to implement returns an integer used to identify a resource. Since it’s entirely up to me what that integer actually is, for now I’m just making it the pointer to the data structure representing that resource, cast to an integer.

    It’s O(1) lookup time, doesn’t impose an arbitrary limit on the number of resources, and doesn’t require any fancy data structures. (Since this is kernel-level code, there aren’t any ready-made containers besides arrays available.) Of course, it means passing a bad integer into the functions that use it will cause a segfault instead of returning an error, which I suppose is a problem. But I can take care of that once I get the basic functionality of the project down and determine whether there will in fact be a hard limit on the number of resources possible (due to other constraints of the system).
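The handle scheme described above might look something like this sketch. The struct, function names, and refcount field are all hypothetical stand-ins for whatever the project actually uses; uintptr_t (C99) is used instead of plain int because it is the one integer type guaranteed to round-trip a pointer, which int is not on 64-bit targets.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical resource type -- illustrative only. */
struct resource {
    int refcount;
    /* ... other per-resource state ... */
};

/* Hand out the pointer itself as the identifier: O(1), no limit on
   the number of resources, no container needed. 0 doubles as the
   error value since a valid allocation is never at address 0. */
static uintptr_t resource_open(void) {
    struct resource *r = malloc(sizeof *r);
    if (!r)
        return 0;
    r->refcount = 1;
    return (uintptr_t)r;
}

/* The "lookup" is just the reverse cast. As the comment notes, a
   bogus id crashes instead of failing cleanly -- validation would
   have to be layered on top. */
static struct resource *resource_lookup(uintptr_t id) {
    return (struct resource *)id;
}
```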
