About Me
Michael Zucchi
B.E. (Comp. Sys. Eng.)
also known as Zed
to his mates & enemies!
< notzed at gmail >
< fosstodon.org/@notzed >
And the winner is ...
Well I kept poking somewhat half-heartedly at some JNI code generator ideas. Just feeling pretty blah over-all; maybe it's the weather ... hay-fever, sleep apnoea, insomnia - all a big pita over the last few days.
I started looking into m4, and it could probably create a fairly concise definition and generator: but boy, the escaping rules and special cases are such a prick to learn it just isn't worth the effort. For all the intricacies of learning m4 there's very little pay-off - it isn't a very useful tool in general, and as I won't be using it all the time I'd just have to learn it all again whenever I looked at it. No thanks.
The cpp stuff is still an idea, but the definitions are clumsier than they need to be, and one cannot automate the parameter-passing-convention inference which is often possible.
But the exercise did prove useful for the programme design. I took the way it worked and ported it to perl instead (now perl is a tool which is easy to learn with a massive pay-off; definitely not the one to use for every problem though). Macro processors do more work than they appear to on the surface, but after a bit of mucking about (and some fighting with weird perl inconsistencies with references and so on) I had something going. Well I suppose it's an improvement; it halved the amount of code required to do the same thing, so that's gotta be better than a poke in the eye.
Because the jjmpeg binding generator is the oldest it's also the crustiest - so when I get around to looking into it I'll probably replace it. It isn't a high priority, since what is there works just fine.
f'ing the puppy
Hmm, I haven't really been keeping up much with nvidia lately - they've obviously never really cared about OpenCL, and what little effort they did for show seemed to disappear early 2011. Oh, and gmail suddenly decided their developer updates were spam ... (they haven't mentioned opencl in any update for over a year).
Still, they make hardware that is capable of computing jobs, and they still seem to want to push their own platform for the job, so one must keep abreast - and if nothing else healthy competition is good for all of us.
I'd seen the odd fanboi post about how 'kepler' was going to wipe the floor with all the competition, and after all this time what has been released looks pretty disappointing. It's nice to see that they've tackled the power and heat problems, but this part is all about games; `compute' and OpenCL are left in the dust, behind even their older parts. And even then the games performance, whilst generally better, isn't always so. It looks like they skimped on the memory width in order to reduce costs so they could be price-competitive with AMD - so presumably a wider-bus version will be out eventually, and is sorely needed to re-balance the system design.
I guess they've decided general purpose compute is still a bit before its time, not ready as a mainstream technology and therefore not worth the investment. Well, it's a gamble. Probably just as much a gamble as AMD putting all their eggs into that basket: although AMD have more to win and lose, as for them it is all about CPU/system performance rather than simply graphics. Still, with multi-core cpus hitting all sorts of performance ceilings, neither AMD nor intel really has any choice, and time will tell who has the better strategy. AMD certainly have a totally awesome vision; whether they can pull it off is another matter. Nvidia's vision only seems to be about selling game cards - nothing wrong with that, as money is what any company is about, but it's hardly inspiring.
I'm not sure how big the market is for high-end graphics-only cards. When reading fanboi 'enthusiast' sites one has to keep in perspective that they are still a niche product, and most graphics cards in use are embedded or at the lower end of the price range. Comments from kids saying they're going to throw away their recently bought top-of-the-range amd/nvidia card now that nvidia/amd have a new one out are just ludicrous and simply not believable (and if they really do such things, their behaviour isn't rational and not worth consideration).
Macro binding
So, late into the night I was experimenting with an alternative approach to a binding generator: using a macro processor.
Idea being: one relatively simple and expressive definition of an interface using macros, then by changing the macro definitions one can generate different outputs.
So for example, for field accessors I ended up with something like this:
class_start(AVCodecContext)
build_gs_int(AVCodecContext, bit_rate, BitRate)
...
class_end(AVCodecContext)
With the right definitions of the macros I then generate C, Java, or wrapper classes by passing this all through the C pre-processor. It's not the most advanced macro processor but it is sufficient for this task.
Then I thought I hit a snag - won't this be totally unwieldy for defining the function calls themselves?
So I looked into an alternative approach: using Java with annotations to define the calling interface instead, and then using reflection to generate the C bindings from that. The problem with this is that it just takes a lot of code. Types to define the type conventions, virtual methods for the generators, and so on and so forth. Actually I spent several times the amount of time spent on the field generator before I had something working (I'm not terribly familiar with the reflection api required).
The definitions are relatively succinct, but only for function calls: for field accessors one ends up writing each get/set function as well. Still not the end of the world and it might be a reasonable approach for some cases.
e.g.
@JGS(cname = "frame_size")
static native int getFrameSize(AVCodecContext p);
@JGS(cname = "frame_size")
static native void setFrameSize(AVCodecContext p, int v);
@JNI(cname = "avcodec_open")
static native int open(AVCodecContext p, AVCodec c);
The annotations are relatively obvious.
Then I had another thought this morning: dynamic cpp macro invocation. I forgot you can create a macro call based on macro arguments. All I would then need to define is a new macro for each number of possible arguments, and other macros could handle the various conventions used to marshal each particular argument. And even then it would mostly be simple cut-and-paste code once a single-argument macro was defined.
So after half an hour of hacking and 50 lines of code (both the java and perl versions were about 400 lines to do this, with perl being much more dense), I came up with some macros that generate the JNI C code once something like this is defined:
call2(AVCodecContext, int, int, avcodec_open, open,
object, AVCodecContext, ctx, object, AVCodec, c)
The arguments are: java/c type, return 'convention', return type, c function call name, java method name, argument 1 convention, type, name, argument 2 convention, type, name. If I removed the argument names - which could be defined in the macro - it would be even simpler.
I 'just' need to define a set of macros for each convention type - then I realised I'd just written some dynamic object-oriented c-pre-processor ... hah.
For a binding like FFmpeg where there are few conventions and it benefits from grouping and 'consistifying' names, i.e. a lot of manual work - this may be a practical approach. And even when there are consistent conventions, it may be a practical intermediate step: the first generator then has less to worry about and all it has to do is either infer or be told the specific conventions of a given argument.
Of course, using a more advanced macro processor might help here, although I'm not as familiar with them as with cpp (and more advanced means harder to learn). So that got me thinking of a programming exercise: try to solve this problem with different mechanisms and just see which turns out the easiest. XSLT anyone? Hmm, maybe not.
I'd started to realise the way I'd done it in jjmpeg's generator wasn't very good: I'm trying to infer the passing convention and coding it in the generator using special cases, rather than working it out at the start and parametrising the generator using code fragment templates. The stuff I was playing with for the openal binding was more along these lines, but it still has some baggage.
openal weirdness
Hmm, so after my previous post I later noticed a strange detail on the alcGetProcAddress and alGetProcAddress functions (BTW the specification, unlike the opencl one, isn't very concise or easy to read at all): they may return per-device or per-context addresses(!?).
So rather than the routing to the correct driver that occurs transparently using OpenCL, the application must do it manually by looking up function pointers on the current context or on a given device ... although it isn't clear when which is exactly needed as some extension docs mention it, and others do not (although some function args are suggestive of which way the functions go). Further complicating matters is that the context may be set per-thread if the ALC_EXT_thread_local_context extension is available. A search on the openal archives nets a conclusion that it's a bit confusing for everyone.
Obviously java can't be looking up function pointers explicitly, so what is a neat solution?
1. When using openal-soft (i.e. any free operating system), just ignore it: that's all it does, i.e. just use the same static symbol resolution mechanism as for the core functions.
2. Look up the function every time - openal-soft uses a linear scan to resolve function names, so this probably isn't a good solution.
3. I thought of trying to use a function table which can be reset when the context changes and so on, but the thread_local_context extension makes this messy - and it's just messy anyway.
4. Move all extension functions to a per-extension wrapper class that loads the right pointers on instantiation. Have factory methods only available on a valid context or device depending on the extension type, and associate the wrapper directly with that.
4 is probably the cleanest solution ...
More JNI'ing about
Well it looks like I brought my 'hacking day' a few days early, and I ended up poking around with JNI most of the day ... Just one of those annoying things that once it was in my head I couldn't get it out (and the stuff I have to do for work isn't inspiring me at the moment - gui re-arrangements/database design and so on).
I took the stuff I discovered yesterday, and tweaked the openal binding I hacked up a week ago. I converted all array arguments + length into java array types, and all factory methods create the object directly.
Then I ran a very crappy timing test ... most of the time in this is spent in alcOpenDevice(), but audio will have a fairly low call requirement anyway. I could spend forever trying to get a properly application-representative benchmark, but this will show something.
First, the C version.
#define buflen 5

static ALuint buffers[buflen];

void testC() {
    ALCdevice *dev;
    ALCcontext *ctx;
    int i;

    dev = alcOpenDevice(NULL);
    ctx = alcCreateContext(dev, NULL);
    alcMakeContextCurrent(ctx);

    alGenSources(buflen, buffers);
    for (i = 0; i < buflen; i++) {
        alSourcei(buffers[i], AL_SOURCE_RELATIVE, 1);
    }
    for (i = 0; i < buflen; i++) {
        alSourcei(buffers[i], AL_SOURCE_RELATIVE, 0);
    }
    alDeleteSources(buflen, buffers);

    alcMakeContextCurrent(NULL);
    alcDestroyContext(ctx);
    alcCloseDevice(dev);
}
Not much to say about it.
Then the JOAL version.
AL al;
ALC alc;
IntBuffer buffers;
int bufcount = 5;

void testJOAL() {
    ALCdevice dev;
    ALCcontext ctx;

    dev = alc.alcOpenDevice(null);
    ctx = alc.alcCreateContext(dev, null);
    alc.alcMakeContextCurrent(ctx);

    al.alGenSources(bufcount, buffers);
    for (int i = 0; i < bufcount; i++) {
        al.alSourcei(buffers.get(i), AL.AL_SOURCE_RELATIVE, AL.AL_TRUE);
    }
    for (int i = 0; i < bufcount; i++) {
        al.alSourcei(buffers.get(i), AL.AL_SOURCE_RELATIVE, AL.AL_FALSE);
    }
    al.alDeleteSources(bufcount, buffers);

    alc.alcMakeContextCurrent(null);
    alc.alcDestroyContext(ctx);
    alc.alcCloseDevice(dev);
}

...
buffers = com.jogamp.common.nio.Buffers.newDirectIntBuffer(bufcount);
al = ALFactory.getAL();
alc = ALFactory.getALC();
...
I timed the initial factory calls which load the library: this isn't timed in the C version. Note that because it's passing around handles you need to go through an interface, and not directly through the native methods.
And then this new version; let's call it 'sal'.
import static au.notzed.al.ALC.*;
import static au.notzed.al.AL.*;

int bufcount;
int[] buffers;

void testSAL() {
    ALCdevice dev;
    ALCcontext ctx;

    dev = alcOpenDevice(null);
    ctx = alcCreateContext(dev, null);
    alcMakeContextCurrent(ctx);

    alGenSources(buffers);
    for (int i = 0; i < buffers.length; i++) {
        alSourcei(buffers[i], AL_SOURCE_RELATIVE, AL_TRUE);
    }
    for (int i = 0; i < buffers.length; i++) {
        alSourcei(buffers[i], AL_SOURCE_RELATIVE, AL_FALSE);
    }
    alDeleteSources(buffers);

    alcMakeContextCurrent(null);
    alcDestroyContext(ctx);
    alcCloseDevice(dev);
}

...
buffers = new int[bufcount];
...
Again the library load and symbol resolution are included - in this case they happen implicitly. Notice that when using the static import it's almost identical to the C version. Only here the array length isn't required, as it's determined by the array itself.
Also this implementation only needs a small number of very trivial classes to be hand-coded, and everything else is done in the C code; although I also looked into wrapping the whole lot (buffers and sources included) in a high-level api as well. The openal headers are used completely untouched, although I have some mucky scripts which call gcc/cproto/grep to extract the information I need.
Apart from the code itself, I tried two array binding approaches: one uses GetIntArrayRegion(), and the other Get/ReleasePrimitiveArrayCritical(). Note that for the case of alSourcei() the binding JNI code only needs to read the array; it doesn't copy it back afterwards.
Timings
The results, the above routine is run 1000 times for each run. The runs are a loop within the process, so only the first time has any library load overheads. I used the oracle jdk 1.7.
run     c       joal    range   critical
0       3.300   3.678   3.518   3.554
1       3.267   3.622   3.446   3.405
2       3.297   3.513   3.493   3.552
3       3.264   3.482   3.448   3.494
4       3.243   3.575   3.553   3.542
5       3.297   3.472   3.395   3.352
6       3.308   3.527   3.376   3.359
7       3.284   3.520   3.354   3.363
8       3.253   3.419   3.363   3.349
9       3.266   3.420   3.429   3.413
ave     3.2779  3.5228  3.4375  3.4383
min     3.243   3.419   3.354   3.349
max     3.308   3.678   3.553   3.554
As you'd expect, C wins out - it just calls all the functions directly, and even the temporary storage is allocated in the BSS.
Both of the 'sal' versions are much the same, and joal isn't too far behind either - but it is behind.
Ok, so it's not a very good benchmark. I'm not going to re-write all the above, but when I changed to 100 iterations, but repeated the inner 2 loops (between gen/deleteSources) 100 times as well: c was about 0.9s, joal averaged about 4s, and sal averaged 2s. But that case probably goes too far in over-representing the method call overheads relative to what you might expect for an application at run-time - JNI has overheads, but as long as you're not implementing java.lang.Math() with it, it's barely going to be measurable once you add in i/o overheads, mmu costs, system scheduling and even cache misses.
At any rate, it validates the approach taken against another long-standing implementation (if not a particularly heavily developed one). Assuming that is I don't have glaring errors in the code and it's not actually doing all the work I ask of it.
SAL binding
Note also that the 'sal' binding hasn't skimped on safety where possible just to try to get a more favourable result (well, the al*v() methods have an implied data length which I am not checking ...), e.g. the symbols are looked up at run-time and suitable exceptions thrown on error.
An example bound method:
// manually written glue code
int checkfunc(JNIEnv *env, void **ptr, const char *name) {
    // returns true if *ptr != null
    // opens library if not opened
    //   sets exception and returns false if it can't open
    // looks up method and field id's if not already done
    // sets *ptr to dlsym() lookup
    //   sets exception and returns false if it can't find it
    // returns true
}

// auto-generated binding
jobject Java_au_notzed_al_ALCAbstract_alcCreateContext(
        JNIEnv *env, jclass jc, jobject jdevice, jintArray jattrlist) {
    static LPALCCREATECONTEXT dalcCreateContext;

    if (!dalcCreateContext
        && !checkfunc(env, (void **)&dalcCreateContext, "alcCreateContext")) {
        return (jobject)0;
    }

    ALCdevice *device = (jdevice ?
        (void *)(*env)->GetLongField(env, jdevice, ALCdevice_p) : NULL);
    jsize attrlist_len = jattrlist ?
        (*env)->GetArrayLength(env, jattrlist) : 0;
    ALCint *attrlist = jattrlist ?
        alloca(attrlist_len * sizeof(attrlist[0])) : NULL;

    if (jattrlist)
        (*env)->GetIntArrayRegion(env, jattrlist, 0, attrlist_len, attrlist);

    ALCcontext *res;

    res = (*dalcCreateContext)(device, attrlist);

    jobject jres = res ?
        (*env)->NewObject(env, ALCcontext_jc, ALCcontext_init_p, (long)res) : NULL;

    return jres;
}
// auto-generated java side
public class ALCAbstract extends ALNative implements ALCConstants {
    ...
    public static native ALCcontext alcCreateContext(ALCdevice device, int[] attrlist);
    ...
}

// manually written classes (could obviously be auto-generated,
// but this isn't worth it if I want to add object api here)
public class ALCcontext extends ALObject {
    long p;

    ALCcontext(long p) {
        this.p = p;
    }
}

// and another
public class ALCdevice extends ALObject {
    long p;

    ALCdevice(long p) {
        this.p = p;
    }
}
So, 32-bit cpu's amongst you will notice that the handle is a long ... but that's only because I haven't bothered worrying about creating a version for 32-bit machines. Actually, because the JNI code is the only one which creates, accesses, or uses 'p' directly, it's actually easier to do this than if I was passing the handle to all of the native methods.
i.e. all I need is a different ALCdevice concrete implementation for each pointer size, and have the C code instantiate each instance itself. Neither the java native method declarations nor any java-side code needs to know the difference. If I wanted a high level ALCdevice object, that could just be abstract and it also needn't know about the type of 'p'.
Other stuff
So one thing I've noticed when doing these binding generators is that every library does things a bit differently.
The earlier versions of FFmpeg were pretty clean, and although the function names were all over the place, most of the calls took simple arguments with obvious wrappings. It's also a huge api ... which one would not want to have to hand-code.
Openal requires passing a lot of arrays+length around, so implementing an array-based interface requires special-case code to remove the length specifiers from the api (of course, it may be that one actually wants these, e.g. to use less than the full array content, or for indexing within an array, but those cases I have intentionally ignored at this point). The api is fairly small too, and changes slower than a wet week! It also has a separate symbol resolution function for extensions - which I haven't implemented yet.
I also looked at OpenCL - and the binding for that requires special-case handling for the event arrays for it to work practically from Java. It is also more of an object based api rather than a 'name' (i.e. an int reference id) based one.
(BTW I'm only experimenting with these apis because I've been looking at them recently and they provide examples of reasonably small and well-defined public interfaces. I am DEFINITELY NOT planning on a whole jogamp replacement here: e.g. opencl and openal are simple stand-alone interfaces that can work mostly independently of the rest of the system. OpenGL OTOH has a lot of weird system dependencies which are a pain to work with - xgl, wgl, etc. - before you even start to talk about toolkit integration).
So ... what was my point here. I think it's that trying to create a be-all-and-end-all binding generator is too much work for little pay off. Each library will need special casing - and if one tries to include every possible api convention (many of which cannot be determined by examining the header files - e.g. reference counting or passing, null terminated arrays, arrays vs pass by reference, etc! etc! etc!) the cost becomes overwhelming.
For an interface like openal - which is fairly small, mostly repetitive, and changes very slowly - all the time spent on getting a code generator to work would probably be better spent on doing it by hand: with a few judiciously designed macros it would probably only be half a day to a day's work once you nutted out the mechanisms. Although once you have a generator working, it's easier to experiment with those mechanisms. In either case, once it's done it's pretty much done too - openal isn't going to be changing very fast.
Although a generator just seems like 'the right way to do it(tm)' - you know, the interface is already described in a machine-readable form, so why not use it? But a 680-line bit of write-only perl is probably going to be more work to maintain than the 1400 lines of simple, much-repeated and never-changing C that it generates.
JNI overheads
Update: Also see the next post which has a slightly more real-world example.
So, i've been poking around with some JNI binding stuff and this time I was experimenting with different types of interfaces: I was thinking of doing more work on the C side of things so I can just directly use the native interfaces rather than having to go through wrappers all the time.
So I was curious to see how much overhead a single field access would be.
I'll go straight to the results.
This is the type of stuff I'm testing:
class HObject {
    long handle;

    public void call1() { /* invokes static native void call(this.handle) */ }
    public void call2() { /* invokes native void call(this.handle) */ }
    native public void call3();
}

class BObject {
    ByteBuffer handle;

    public void call1() { /* invokes static native void call(this.handle) */ }
}
All each native function does is resolve the 'handle' to the desired pointer, and assign it to a static location. e.g. for a ByteBuffer it must call JNIEnv.GetDirectBufferAddress(), for long it can just use the parameter directly, and for an object reference it must call back into the JVM to get the field before doing either of those.

The timings below are for 10x10^6 invocations, spread over 10050 objects (some attempt to avoid unrealistic optimisations), repeated 5 times: the last result is shown.
#  What                                      time
0  static native void call(HObject o)        0.15
1  HObject.call1()                           0.10
2  static native void call(long handle)      0.10
3  static native void call(HObject.handle)   0.10
4  HObject.call2()                           0.12
5  HObject.call3()                           0.14
6  static native void call(BObject o)        0.59 (!)
7  BObject.call1()                           0.36 (!)
The timings varied a bit so I just showed them to 2 significant figures.
Whilst case 2 isn't useful, cases 1 and 3 show that hotspot has effectively optimised out the field dereferences after a few runs. Obviously this is the fastest way to do it. Although it's interesting that the static native method (1) is measurably different to the object native method (4).
The last case is basically how I implemented most of the bindings i've used so far, so I guess I should have a re-think here. There are historical reasons I implemented jjmpeg this way - I was going to write java-side struct accessors. But since I dropped that idea as impractical, it may make sense for a rethink here. PDFZ does have some java-side struct builders, but these are only for simple arrays which are already typed appropriately.
I didn't test the more complex double-level used in jjmpeg which allows its native objects to be garbage collected more easily.
Conclusions
So I was thinking I could implement code using case 0 or 5: this way the native calls can just be used directly without any extra glue-code.
There are overheads compared to cases 1 and 4, but it's less than 50%, and relatively speaking it will be far less than that. And most of this is due to the fact that hotspot can remove the field access entirely (which is of course: very cool).
Although it is interesting that a static native call from a local method is faster than a local native call from a local method. Whereas a static native call with an object parameter is slower than a local native call with an object parameter.
Although 10x10^6 calls are a lot of calls, so the absolute overhead is pretty insignificant even for the worst case. Even if it's 5x slower, it's still only 59 vs 10 ns per call.
Small Arrays
This has me curious now: I wonder what the overhead for small arrays are, versus using a ByteBuffer/IntBuffer, etc.
I ran some tests with ByteBuffer/IntBuffer vs int[], using Get/ReleaseArrayElements vs using alloca(), and Get(Set)ArrayRegion. The test passes from 0 to 60 integers to the C code, which just adds up the elements.
Types used:
IntBuffer ib = ByteBuffer.allocateDirect(nels * 4)
        .order(ByteOrder.nativeOrder()).asIntBuffer();
ByteBuffer bb = ByteBuffer.allocateDirect(nels * 4)
        .order(ByteOrder.nativeOrder());
int[] ia = new int[nels];
Interestingly:
- Using GetArrayElements() + ReleaseArrayElements() is basically the same as GetArrayRegion/SetArrayRegion up until there are 32 array elements, beyond that the second is faster. Which is most counter-intuitive.
- I thought that using a ByteBuffer would be slower than using an IntBuffer (which is derived from a ByteBuffer using .asIntBuffer()), but it turns out that GetDirectBufferCapacity returns the capacity in elements, not in bytes (i.e. as the java is documented, but different to the JNI method docs I found). Actually a ByteBuffer is a tiny bit faster.
- If one is only reading the data, then calling GetArrayRegion to a copy on the stack is always faster than anything else for these sizes.
- For read/write the direct byte buffer is the fastest.
But this was just using the same array. What about if I update the content each time? Here I am using the same object, but setting its content before invocation.
- Until 16 elements, the order is IntBuffer, Get/SetIntArrayRegion, Get/ReleaseIntArray, ByteBuffer
- 16-24 elements, Get/SetIntArrayRegion, IntBuffer, Get/ReleaseIntArray, ByteBuffer
- Beyond that, Get/SetIntArrayRegion, Get/ReleaseIntArray, IntBuffer, ByteBuffer
Obviously the ByteBuffer suffers from calling setInt(), but all the values are within 30% of each other so it's much of a muchness really.
And finally, what if the object is created every time as well?
- Here, any of the direct buffer methods fall down - several times slower than the arrays - 6-10x slower than the fastest array version.
- Here, using Get/SetIntArrayRegion is much faster than anything else, it was consistently at least twice as fast as the Get/ReleaseIntArray version.
So this contains a few curious results.
Firstly (well, perhaps not so curious): a direct Buffer is only always going to win if you know it has been allocated beforehand. Dynamic allocation will be a killer; a cache might even it up, but I'm doubtful it would put any Buffer back in a winning spot.
Secondly - again not so curious: small array allocation is pretty damn fast. The timings hint that these small loops might be optimising away the allocation completely which cannot be done for the direct buffers.
And finally the strangest result: copying the whole array to the stack is usually faster than trying to access it directly. Possibly the latter case has to take the memory from the heap first anyway, and is effectively just doing the same thing; or it needs to lock the region or perform other GC-related work which slows it down.
Small arrays aren't the only thing needed for a JNI binding, but they do come up often enough. Now I know they're just fine to use, I will look at using them more: they will be easier to use on the Java side too.
Update: So I realised I'd forgotten Get/ReleasePrimitiveArrayCritical: for the final test cases, this was always a bit faster even than Get/SetArrayRegion. I don't know if it has other detrimental effects in an MT application though.
However, it does seem to work fine for very large arrays too, so it might be the simple one-stop shop, as at least on Oracle's JVM it always seems to be the fastest solution.
I tried some runs of 1024 and 10240 elements, and oddly enough the same relative results hold in all cases. Direct buffers only win when pre-allocated, GetIntArrayRegion is equal/faster to GetIntArrayElements, and GetCriticalArray is the fastest.
Filter Graph
This morning I threw together an implementation of some of the ideas I mentioned in the last few posts for a filter/processing graph and checked them into socles.
I had a pretty good result applying the same ideas to my work stuff and I want to eventually use them elsewhere and experiment as well. The main thing I'm trying to resolve is the memory management and scheduling issues. Well I guess the other thing which is also pretty `main' is that by having a component framework it is easier to encapsulate functionality into re-usable connectible parts: i.e. less code, more functionality.
At the moment I have a dummy graph executor (it executes jobs in the right order, but not with any event management), and I still need to play with the queuing. But simple examples work because it's always using the same queue, and the climage-to-java-image conversion is necessarily synchronous. If I get it all working right, separating the graph execution from the nodes will allow me to try out different ideas - e.g. multiple queues, even multiple devices(?) without having to change the underlying code.
Actually one of the biggest hassles was attempting to support Java2D images as much as possible. I've tried to ensure images never lose any bits, although they may need format conversion (e.g. there are no 3-channelx8 bit image formats). I haven't tested all the formats as yet so quite likely the channel orderings are broken for the colour images (it's usually easier to just run it and tweak it till it works rather than try to grok all the endian-ness issues and ordering nomenclature of every library involved). I've got a feeling it was a bit of overkill - Greyscale/RGBA, UNORM_INT8/FLOAT are probably the only formats one really needs to handle at this level. Given that there are efficiency issues, it may be that I throw some of the code away so they can be addressed with less work.
Anyway, a simple example shows that even at this stage it can be used to create useful code: SimpleFilter.java. (at this stage it's not doing resource management, but most of the skeleton is there to handle it).
Eventually I want to be able to do similar things with video sequences (via jjmpeg).
(To what end? Who knows?)
Processing Graphs 3
So based on the stuff in my previous post I decided to ditch the stuff I had already done for work and start with the stuff I worked on in socles on my hack-day as a baseline.
I had to fill out the event management stuff, and then I had to port the few processing nodes I had already ported to my other half-arsed attempt - which apart from a couple of cases was fairly straightforward. I kept the getInputX/getOutputX stuff, implemented some image converters, and wrote a video frame source node (which turned out a bit more involved than I wanted).
Some of the code is a bit messier, and now that I've implemented the event stuff I had to go back and fill that out: which was a bit tedious. The 'source' interfaces are a bit of extra mess too, but aren't any worse than implementing bound properties, and in some cases I can sub-class the implementation.
The graph executor is nice and simple: it visits all nodes and runs each one once, in the right order based on the graph. Each execution produces a CLEvent, and takes a CLEventList of all the events from pre-requisite incoming nodes. Each node takes a queue, a CLEventList of conditions, and a CLEventList to which to add its (one-and-always-one) event output. It won't handle all cases but it will do what I need.
Because input/outputs use simple component-ised syntax, I can auto-connect some of the sources as well using some simple reflective code: this is only run once per graph so isn't expensive.
Possibly I want to allow individual 'source' instances to be able to feed off their own valid-data events too: although I think that's just adding too much complication. If nodes produce partial results the nodes themselves can always be split.
So after all of that (it's been a long day for unrelated reasons), I think it's an improvement ... at least the individual nodes are more re-usable, although some of the glue code to make the stuff I have work is a bit shit. I still have a bunch of cpu-synchronous code because I can't tie in user events with the api I have (I need to update my jocl for that), but with the api I've chosen I should be able to retro-fit that, or other possible design changes later on (e.g. multiple devices? multiple queues?) without needing to change the node code.
Actually I wish I had had more time to play with it in socles before I went with it, because there are always things you don't notice until you try it out. And things are a lot simpler and easier to test in the 'WebcamFX' demo than in a large application. This is also just a side-issue which is keeping me from the main game, so I want to get clear of it ASAP, whereas in socles it's just another idea to play with.
Copyright (C) 2019 Michael Zucchi, All Rights Reserved.
Powered by gcc & me!