Tuesday, 25 February 2014, 00:13

more on JNI overheads

I wrote most of the OpenCL binding yesterday but now i'm mucking about with simplifying it.

I've experimented with a couple of binding mechanisms but they have various drawbacks. They all work in basically the same way: there is an abstract base class for each type and then a concrete platform-specific implementation that defines the pointer holder.

The difference is how the jni C code gets hold of that pointer:

Passed directly
The abstract base class defines all the methods, which are implemented in the concrete class, which just invokes the native methods. The native methods may be static or non-static.

This requires a lot of boilerplate in the java code, but the C code can just use a simple cast to access the CL resources.

C code performs a field lookup
The base class can define the methods directly as native. The concrete class primarily is just a holder for the pointer value.

This requires only minimal boilerplate but the resources must be looked up via a field reference. The field reference is dependent on the concrete type though.

C code performs a virtual method invocation.
The base class can define the methods directly as native. The concrete class primarily is just a holder for the pointer value.

This requires only minimal boilerplate but the resources must be looked up via a method invocation. Here, though, the lookup is independent of the concrete type.

The last is kind of the nicest - in the C code it's the same amount of effort (coding wise) as the second but allows for some polymorphism. The first is the least attractive as it requires a lot of boilerplate - 3 simple functions rather than just one empty one.

But, a big chunk of the OpenCL API is dealing with mundane things like 'get*Info()' lookups, and to simplify its use I came up with a number of type-specific calls. However rather than write these for every possible type I pass a type-id to the JNI code so a single function works. This works fine except that I would like to have separate CLBuffer and CLImage objects - and in this case the second implementation falls down.
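
As a rough sketch of how that looks from the Java side, the shared base class might grow something like this (the names below - getInfo, the TYPE_ constants and the typed wrappers - are hypothetical, not the actual binding):

public abstract class CLObject {
    // Hypothetical type ids the single native getInfo switches on.
    static final int TYPE_INT = 0;
    static final int TYPE_LONG = 1;
    static final int TYPE_STRING = 2;

    // One native entry point; the C side performs the appropriate
    // clGet*Info call and boxes the result according to typeid.
    native Object getInfo(int param, int typeid) throws CLException;

    // Thin type-specific wrappers so callers don't deal with casts.
    public int getInfoInt(int param) throws CLException {
        return (Integer) getInfo(param, TYPE_INT);
    }

    public long getInfoLong(int param) throws CLException {
        return (Long) getInfo(param, TYPE_LONG);
    }

    public String getInfoString(int param) throws CLException {
        return (String) getInfo(param, TYPE_STRING);
    }
}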

To gain more information on the trade-off involved I did some timing on a basic function:

  public CLDevice[] getDevices(long type) throws CLException;

This invokes clGetDeviceIDs twice (first to get the list size) and then returns an array of instantiated wrappers for the pointers. I invoked this 10M times for various binding mechanisms.
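
The loop itself is nothing sophisticated - something along these lines (a sketch only; getPlatforms() and the hard-coded CL_DEVICE_TYPE_ALL value are placeholders, not necessarily the real api):

public class DeviceLookupTiming {
    public static void main(String[] args) throws CLException {
        CLPlatform platform = CLPlatform.getPlatforms()[0]; // placeholder accessor
        long type = 0xFFFFFFFFL;                            // CL_DEVICE_TYPE_ALL

        long start = System.nanoTime();
        for (int i = 0; i < 10_000_000; i++)
            platform.getDevices(type);
        long end = System.nanoTime();

        System.out.printf("10M getDevices calls: %.3fs%n", (end - start) / 1e9);
    }
}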

Method                   Time
 pass long                13.777s
 pass long static         14.212s
 field lookup             14.060s
 method lookup            16.252s

So, some interesting points here. First is that static method invocations appear to be slower than non-static ones even when the pointer isn't being used. This is somewhat surprising as 'static' methods seem to be quite a popular mechanism for JNI bindings.

Second is that a field lookup from C doesn't cost much more than a field lookup in Java.

Lastly, as expected the method lookup is more expensive, and if one considers that the task does somewhat more than just the pointer resolution, then relatively it is quite significantly more expensive. So much so that it probably isn't the ideal solution.

So ... it looks like I may end up going with the same solution I've used before. That is, just use the simple field lookup from C. Although it's slightly slower than the first mechanism it is just a lot less work for me without a code generator and produces much smaller classes either way. I'll just have to work out a way to implement the polymorphic getInfo methods some other way: using IsInstanceOf() or just using CLMemory for all memory types. In general performance is not an issue here anyway.
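
For what it's worth, the 'CLMemory for all memory types' option would look something like this - a single class standing in for both buffers and images (a hypothetical sketch; the method names and the routing described in the comments are my assumptions):

public abstract class CLMemory extends CLObject {
    // Shared query for any cl_mem, routed to clGetMemObjectInfo on the C side.
    public native long getMemObjectInfoLong(int param) throws CLException;

    // Image-only query, routed to clGetImageInfo; calling it on a plain
    // buffer just surfaces the CL error as a CLException.
    public native long getImageInfoLong(int param) throws CLException;
}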

I suppose to do it properly I would need to profile the same stuff on 32-bit platforms and/or android as well. But right now i don't particularly care and don't have any capable hardware anyway (apart from the parallella). I wasn't even bothering to implement the 32-bit backend so far anyway.

Examples

This is just more detail on how the bindings work. In each case objects are instantiated from the C code - so the java doesn't need to know anything about the platform (and is thus automatically platform agnostic).

First is passing the pointer directly. The drawback is all the bulky boilerplate - it looks less severe here as there is only a single method.

public abstract class CLPlatform extends CLObject {

    abstract public CLDevice[] getDevices(long type) throws CLException;

    static class CLPlatform64 extends CLPlatform {
        final long p;

        CLPlatform64(long p) {
            this.p = p;
        }

        public CLDevice[] getDevices(long type) throws CLException {
            return getDevices(p, type);
        }

        native CLDevice[] getDevices(long p, long type) throws CLException;
    }

    static class CLPlatform32 extends CLPlatform {
        final int p;

        CLPlatform32(int p) {
            this.p = p;
        }

        public CLDevice[] getDevices(long type) throws CLException {
            return getDevices(p, type);
        }

        native CLDevice[] getDevices(int p, long type) throws CLException;
    }
}

Then having the C code look up the field. The drawback is that each concrete class must be handled separately in the C code.

public abstract class CLPlatform extends CLObject {
    native public CLDevice[] getDevices(long type) throws CLException;

    static class CLPlatform64 extends CLPlatform {
        final long p;

        CLPlatform64(long p) {
            this.p = p;
        }
    }

    static class CLPlatform32 extends CLPlatform {
        final int p;

        CLPlatform32(int p) {
            this.p = p;
        }
    }
}

And lastly having a pointer retrieval method. This has lots of nice coding benefits ... but too much in the way of overheads.

public abstract class CLPlatform extends CLObject {
    native public CLDevice[] getDevices(long type) throws CLException;

    static class CLPlatform64 extends CLPlatform implements CLNative64 {
        final long p;

        CLPlatform64(long p) {
            this.p = p;
        }

        public long getPointer() {
            return p;
        }
    }

    static class CLPlatform32 extends CLPlatform implements CLNative32 {
        final int p;

        CLPlatform32(int p) {
            this.p = p;
        }

        public int getPointer() {
            return p;
        }
    }
}

Or ... I could of course just use a long for storage on 32-bit platforms and be done with it - the extra memory overhead is pretty much insignificant in the grand scheme of things. It might require some extra work on the C side when dealing with a couple of the interfaces but it is pretty minor.

With that mechanism the worst-case becomes:

public abstract class CLPlatform extends CLObject {
    final long p;

    CLPlatform(long p) {
        this.p = p;
    }

    public CLDevice[] getDevices(long type) throws CLException {
        return getDevices(p, type);
    }

    native CLDevice[] getDevices(long p, long type) throws CLException;
}

Actually I can then move 'p' to the base class, which simplifies any polymorphism too.

I still like the second approach somewhat for a hand-coded binding since it keeps the type information and allows all the details to be kept in the C code, where they are easier to hide using macros and so on. And the java becomes very simple:

public abstract class CLPlatform extends CLObject {
    CLPlatform(long p) {
        super(p);
    }

    public native CLDevice[] getDevices(long type) throws CLException;
}
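
For completeness, CLObject itself then ends up as little more than the handle holder - something like this (a sketch; the getInfo-style helpers mentioned earlier are omitted, and the equals/hashCode identity-by-pointer behaviour is my own assumption, not necessarily what the binding does):

public abstract class CLObject {
    // The native cl_* handle, stored in a long even on 32-bit platforms.
    final long p;

    CLObject(long p) {
        this.p = p;
    }

    // Two wrappers around the same native handle compare as equal.
    @Override
    public boolean equals(Object o) {
        return o instanceof CLObject && ((CLObject) o).p == p;
    }

    @Override
    public int hashCode() {
        return (int) (p ^ (p >>> 32));
    }
}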

CLEventList

Another problematic part of the OpenCL api is cl_event. It's actually a bit of a pain to work with even in C but the idea doesn't really map well to java at all.

I think I came up with a workable solution that hides all the details without too much overhead. My initial solution is to have a growable list of items (the same as JOCL) managed on the Java side. It's a bit messy on the C side but really messy on the Java side:

public class CLEventList {
   static class CLEventList64 extends CLEventList {
      int index;
      long[] events;
   }
}

...
   enqueueSomething(..., CLEventList wait, CLEventList event) {
       CLEventList64 wait64 = (CLEventList64)wait;
       CLEventList64 event64 = (CLEventList64)event;

       enqueueSomething(...,
           wait64 == null ? 0 : wait64.index, wait64 == null ? null : wait64.events,
           event64 == null ? 0 : event64.index, event64 == null ? null : event64.events);

       if (event64 != null) {
           event64.index+=1;
       }
   }

Yeah, maybe not - for the 20 odd enqueue functions in the API.

So I moved most of the logic to the C code - actually the logic isn't really any different on the C side, it just has to do a couple of field lookups rather than take arguments - and I added a method to record the output event.

public class CLEventList {
   static class CLEventList64 extends CLEventList {
      int index;
      long[] events;
      void addEvent(long e) {
        events[index++] = e;
      }
   }
}

...
   enqueueSomething(..., CLEventList wait, CLEventList event) {
       enqueueSomething(...,
           wait,
           event);
   }

UserEvents are still a bit of a pain to fit in with this but I think I can work those out. The difficulty is with the reference counting.

Tagged code, hacking, java, opencl.
Sunday, 23 February 2014, 09:39

Distraction

As a 'distraction' last night I started coding up a custom OpenCL binding for Java. This was after sitting / staring at my pc for a few hours wondering if i'd simply given up and 'lost the knack'. Maybe I still have. Actually in hindsight i'm not sure why i'm doing it other than as some relatively 'simple' distraction to keep me busy. It's quite simple because it's mostly a lot of boilerplate mapping the relatively concise OpenCL api to a small number of classes and there isn't too much to think about.

Not sure i'll finish it actually. Like I said, distraction.

But FWIW I took a different approach to the binding this time - all custom code, trying to use / support native java types where possible (rather than forcing ByteBuffer for every interaction), etc. Also a different approach to the 32/64 bit problem compared to previous JNI bindings - spreading the logic between the C and Java code by having the C make the decisions about constructors but having the Java define the behaviour via abstract methods (it's more obvious than i'm able to describe right now). Well I got as far as some of CLContext but there's still a day or two's work to get it 'feature complete' so we'll see if I get that far.
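
To give an idea of what 'support native java types' means in practice, the enqueue calls end up with overloads along these lines (hypothetical signatures only - the names and parameters here are illustrative, not the actual api):

public abstract class CLCommandQueue extends CLObject {
    // Direct ByteBuffer path, for zero-copy style use.
    public native void enqueueWriteBuffer(CLBuffer mem, boolean blocking,
            long offset, long size, java.nio.ByteBuffer data) throws CLException;

    // Plain Java array path: the C side copies the elements via the JNI
    // array functions, so callers don't have to marshal into a ByteBuffer.
    public native void enqueueWriteBuffer(CLBuffer mem, boolean blocking,
            long offset, float[] data) throws CLException;
}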

Was a nice day today so after a shit sleep (i think the dentist hit some nerves with the injections and/or bruised the roots - the original heat-sensitive pain is gone but now i have worse to deal with) I decided to try to do some socialising. I dropped by some mates' houses unannounced but I guess they were out doing the same thing; I did catch up with a cousin I haven't seen properly for years though. Pretty good for just off 70, wish I had genes from his part of the family tree. Then had a few beers in town (and caught up with him and his son by coincidence) - really busy for a Sunday, no doubt due to the Fringe.

Plenty of food-for-the-eyes at least. Hooley dooley.

Tagged biographical, hacking.
Saturday, 22 February 2014, 06:05

Small poke

Started moving some of my code and whatnot over to the new pc and had a bit of a poke around DuskZ after fixing a bug in the FX slide-show code exposed by java 8.

I'm just working on putting the backend into berkeley db. Took a while to remember where I was at, then I made some changes to add a level of indirection between item types and where they exist (on-map or in-inventory). And then I realised I need to do something similar for active objects but hit a small snag on how to do it ...

I'm trying to resolve the relationship between persistent storage and active instances whilst maintaining the class hierarchy and trying to leverage indices and referential integrity from the DB. And trying not to rewrite huge chunks of code. I think i'm probably just over-thinking it a bit.

Also now have too many importers/exporters for dead formats which can probably start to get culled.

So yeah, quick visit but (still) need to think a bit more.

Tagged dusk.
Wednesday, 19 February 2014, 04:03

Kaveri 'mini' pc

Yesterday I had to go somewhere and it was near a PC shop I go to so I dropped in and ordered the bits for a new computer with an A10-7850K APU. I got one of the small antec cases (ISK300-150) - for some reason I thought it had an external PSU so I wasn't considering it at first. I'm really over 'mini' tower cases these days which don't seem too mini. Ordered a 256GB SSD - not really sure why now that i think about it, my laptop only has a 100G drive and unless I have movies or lots of DVD iso's on the disk that is more than enough. 8G of DDR3-2133 ram, ASRock ITX board. Hopefully the 150W PSU will suffice even if i need to run the APU in a lower-power mode, and hopefully everything fits anyway. Going to see how it goes without an optical drive too.

I guess from being new, using some expensive bits like the case, being in australia, and not being the cheapest place to get the parts ... it still added up pretty fast for only 5 things: to about $850 just for the computer with no screens or input peripherals. *shrug* it is what it is. As I suspected the guy said nobody around here really buys AMD of late. Hopefully the HSA driver isn't too far away either; i'm interested in looking into HSA enabled Java, plain old OpenCL, and - probably the most interesting to me - other ways to access HSA more directly (assuming linux will support all that too ...). Well, when I get back into it anyway.

Should hopefully get the bits tomorrow arvo - if i'm not too rooted after the "root canal stage 1" in the morning (pun intended). I'll update this post with the build and os install - going to try slackware on this one. I'm interested to see EFI for the first time and/or if there will be problems because of it; i'm no fan of the old PC-BIOS and it's more than about time it died (asrock's efi gui looks pretty garish from pics mind you). Although if M$ and intel were involved i'm sure they managed to fuck it up somehow (beyond the obvious mess with the take-over-your-computer encrypted boot stuff. I'm pretty much convinced this is all for one purpose: to embed DRM into the system. I have a hunch that systemd will also be the enabler for this to happen to GNU/Linux. Making life difficult for non-M$ os's was just a bonus.)

PS This will be my primary day-to-day desktop computer; mostly web browsing + email, but also a bit of hobby hacking, console for parallella, etc.


Dentist was ... a little disturbing. He wasn't at all confident he was even going to be able to save the tooth until the last 10 minutes, after poking around for an hour and a half. He was just about to give up. It was for resorption - and amounted to a very deep and very fiddly filling that went all the way through the top and out the side below the gum-line. Apart from being pretty boring it wasn't really too bad except for a couple of stabs of pain when he went into the nerves before blasting them with more drugs ... until the injections wore off that is. Ouch - I think mainly just bruising from the injections. Well I hope it was worth it anyway and it doesn't just rot away after all that; even with a microscope I don't know how he could see what was going on. Have to go back in 3 months for the root canal job :-( That's the expensive one too.


Anyway, had a few beers then went and got the computer bits.

Case is an antec ISK300-150. The in-built PSU is about 1/4 the size of a standard ATX PSU.

Motherboard is ASRock FM2A88X-ITX - haven't bought one for a while so it seems to have an awful lot of shit on it. Not sure what use hdmi in is ...

And everything fits fairly well. The main pain was the USB3 header connector which is 2 fat cables and a tall connector. This is the first SSD i've installed and it's interesting to see how small/light they are. The guy in the computer shop was originally going to sell me some crucial ram but i went with a lower-profile g.skill one - and just as well, I don't think the other would have fit.

Apart from that everything fits in pretty easy (I might cable-tie some of the cables to the frame though). I updated the firmware using the network bios update thing - which was nice.

Then I booted the slackware64 usb image, created the partitions using gdisk, and started installing directly from my ISP's slackware mirror. A bit slower than doing it locally but I'm in no rush.

So far it's so boringly straightforward there's nothing really to report. I presume the catalyst driver will be straightforward too.

I have an old keyboard I intend to use and i was surprised the mobo comes with a PS/2 socket (I was going to use a usb converter). I got it at a mysterious pawn shop one saturday afternoon - mysterious because i've never been able to find it again despite a few attempts looking for it. I must've wildly mis-remembered where it was. It's got a steel base and no m$ windoze keys.

Time passes ... (installs via ftp) ...

Ok, so looks like I did make a mistake: one must boot in EFI mode from the USB stick for it to install the EFI loader properly. Initially it must've booted using BIOS mode automagically so it didn't prompt for the ELILO install. I just rebooted from the stick and ran setup again. It setup EFI and the bios boot menu fine.

And it took me a little while on X - the APU requires a different driver from the normal ones (search apu catalyst driver). And ... well my test monitor with a HDMI to DVI cable had slipped out a bit and caused some strange behaviour. It worked fine in text mode and for the EFI interface, but turned off when X started (how bizarre). Once I seated it properly it worked as expected. Now hopefully that HSA driver isn't too far away.

Now i've got it that far I don't feel like shuffling screens and cables around to set it up, maybe tomorrow.


Must've had too many coffees yesterday at the pub, I ended up installing the box in a 'temporary' setup and playing around till past midnight.

I've got another workstation on the main part of the desk so i'm just using the return which is only 450mm deep - it's a bit cramped but I think this will be ok - not sure on the ergonomics yet. This is where I had my laptop before anyway. There may be other options too but this will do for now.

And yeah, I really did buy some 4:3 monitors although I got them a few years ago (at a slight premium over wide-screen models). For web or writing or pretty much anything other than playing games or watching movies, it's a much better screen size. These 19" models have about as much usable space as a 24" monitor in much less physical area and even a higher resolution at 1600x1200.

I also had a bit of a play with the thermal throttling and so on. With no throttling it gets hot pretty fast - the AMD heatsink is a funny vertical design that doesn't allow cross-flow from the case fan so it doesn't work very well. And its radial design also seems to cause extra fan noise when it ramps up. The case fan is a bit noisy too. If i turn it up flat out it will cause the cpu fan to slow down to a reasonable level - so I guess I could operate it that way if I really wanted the speed.

Throttling at 65W via the bios seems a good compromise, I can set the case fan to middle-speed (or low if i'm not doing much) and the machine is only about 10-15% slower (compiling linux).

I knew it was going to be a compromise when going for such a small case so this is ok by me.


Hmm, maybe I spoke too soon - although the X driver is working, GL definitely isn't. For whatever reason GL seemed to point to the wrong version (libGL.so.1.2 points to fglrx but libGL.so.1.2.0 was the old one).

But fixing that ... and nothing GL works at all. Just running glxinfo causes artifacts to show up and anything that outputs graphics == instant (X) crash.

Trying newer kernels.


Initially I had no luck - i built a 3.12.12 kernel using the huge config from testing/; the driver build fails due to the use of a GPL symbol. It turns out that was because kernel debugging was turned on in that config. Removing that let me build the driver.

While I was building 3.12.12 I also tried 3.13.4 ... But the driver interface won't build with this one and it looks like it needs a patch for that. Or I missed some kernel config option in the byzantine xconfig (there's something that definitely hasn't improved over the years).

So with 3.12.12 and a running driver GL still didn't seem to work and crashed as soon as I started any GL app. I was about to give up. Then as one last thing I tried turning on the iommu again; and voila ... so far so good. Or maybe not - that lasted till the next reboot. Tried an ATX PSU as well. No difference.

Blah. I have no idea now?

Then I saw a new bios came out between when i updated it yesterday and today so I tried that.

Hmm, seems to be working so far. I reset the bios to defaults (oops, bad idea, it wiped out the efi boot entry), fixed the boot entry, fixed the ram speed (it was using 1600 instead of 2133). Doesn't need the iommu on. And doesn't seem to need thermal throttling to keep it running ok. So maybe it was a bung bios.

Bloody PeeCees!

I decided to do a little cleanup of the cables to help with airflow and tidy up the main volume. It had been getting a bit warm on one of the support chips (the heatsink on left corner of mobo). The whole drive frame is a bit of a pain for that tiny SSD drive.

Since the BIOS update it's been running a lot cooler anyway. In general use I might be able to get away with the case fan on its lowest setting with all the default BIOS settings.


Update: Been running solid for the 3 days since I put it back together. During 'normal use' the slowest fan setting is more than enough and it runs quiet and cool (normal use == browsing 20+ tabs, pdf viewers, netbeans, a pile of xterms). And quite novel having a 'suspend to ram' option that works reliably on a desktop machine (like i said: been a long time since i built a pc, and that just didn't work properly back then). Yay for slackware!

Tagged biographical.
Monday, 17 February 2014, 10:47

Well that kinda sucked ...

Yeah so ..., nice birthday present. An hour in a dentist's chair while he tried to cause pain repeatedly - to isolate what the problem was. Apparently my pain threshold is lower than it should be because I don't go to the dentist regularly (somehow I don't follow that logic; and/or just as well I never fucking went to the dentist if getting used to sharp pain is one side-effect; it fucking hurt a lot more than a broken arm, that's for sure). And after all that the original dentist had the correct diagnosis - the specialist just kept saying how unusual it all was. Just what one wants to hear ... Just as well humans can't actually remember pain.

Apparently the sleep apnoea device can't be a cause of problems, and otherwise I have rather robust cavity-free teeth (which i'm pretty pleased with given how long it's been since i've been to a dentist).

Anyway, now queued up for an hour long operation later in the week to do some pretty nasty drilling which basically kills the inside of the tooth. What can you do eh ...

Then I did a bit of a pub crawl on the way home. Probably should do that more often if only to perve on the hot pretty things walking past.

I no longer have a mobile so I had no way to ping my friends (yes i do have some) to catch up for a birthday drink; so it just turned into a pretty depressing and isolated few hours in the end. I wasn't sure how I was going to end up after the appointment so I didn't organise anything in advance and I haven't been out for ages either.

Monday, 17 February 2014, 03:53

slackware update oops

So I decided to update one of my old laptops the other day (IBM Thinkpad T40); I only use it for web browsing and it's running slackware 14.0.

Its CPU is quite old and doesn't support PAE kernels ...

But for some reason slackpkg decided to change to the PAE kernel when it ran lilo. Actually it's kind of funny it still uses lilo; I thought that died a decade ago. By luck I found the install DVD relatively easily and managed to boot into single-user mode against the on-board HDD and point lilo to the correct kernel. Although booting off dvd was a bit flakey - i had to power down and disconnect the mains between reboots otherwise the screen stayed black.

It's only got 512MB RAM which sadly isn't really enough to do much these days. I looked into buying some more SODIMMs but it looks like it isn't worth it (around here, if you can even find PC2100 SODIMMs). For a 10 year old machine it still functions pretty well otherwise. Not sure it's worth upgrading the memory on my X61 thinkpad either, and the fan seems to be getting worse - I don't want to have to pull the whole thing apart to see if i can fix that.

I've been continuing to look into getting a small-as-possible ITX Kaveri machine going to replace my day-to-day use of the X61. At first I was disappointed in the case sizes available but there are one or two that will probably suffice - with psu, heatsink, hdds and air-flow you just can't make it too small. Unless gigabyte come out with a Kaveri based BRIX anyway. Or I get keen enough to make my own case using a low-profile PSU. Most PC shops around here are all intel so the AMD stuff isn't that common, although it's still possible to get it. The fanless heatsink cases (can't remember the brand) looked interesting, until I realised they needed an external power brick and cost a bit too much. Not particularly attractive either. I have another workstation but that's in a less convenient room; and it's pretty much relegated to a shitty/unreliable mythtv server atm so that can stay there (not sure why i bother, i haven't watched any recordings from it for months).

But for now i'm more pre-occupied with dental issues. After some fuckups when I got braces back in my youth I don't have much enthusiasm for dentists, but after having a problem that wasn't fixing itself I finally went to a dentist (after, err, 25 years or something) and found out I need root canal work done; well whatever, so long as it just gets fixed. Seeing a specialist in a couple of hours. My shitty teeth have always been a pita since I was a kid and i have a feeling this won't be the last of it (and looking back, i'm sure it affected my life trajectory somewhat. There's a reason i only smile when i'm drunk), and i'm pretty sure that the sleep apnoea device didn't help. At least the local dentist was quite good.

Tagged biographical.
Wednesday, 12 February 2014, 07:21

javafx + internet radio = sigh

I thought i'd look at porting the android internet radio player I have over to JavaFX; although jjmpeg is an option I thought i would first try with the JavaFX MediaPlayer. Thought it might be a simple distraction on a hot day.

Unfortunately it's a no go so far.

Firstly it just doesn't accept the protocol from the shoutcast server that i'm using: it sends a response with ICY 200 OK rather than HTTP/1.1 200 OK. Because mpeg audio is designed for streaming, players normally just ignore that and keep going even if they aren't aware of the shoutcast protocol (which they normally are).

Then I realised it requires ffmpeg 0.10 libraries to function (the same version jjmpeg/head uses, which seems pretty odd for such a new product) - so I pointed to my local build of those and at least got it playing local mp3 files ...

So since I had that going I hacked up a quick proxy server (it's something i wanted to look at anyway wrt android, so the player can extract the current song info from the stream) and after mucking about with some silly bugs I managed to get it to the point of segfaulting. If I save the content of the stream it will play that ok; it just seems to have trouble loading from a network. The proxy server just rewrites the ICY status line in the response to say HTTP/1.1 instead.
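
The rewrite itself is about as simple as it sounds - only the status line needs touching before piping the rest through. Something along these lines (a bare-bones single-connection sketch, not the actual proxy; the host and ports are made up, and the client's own request is ignored for brevity):

import java.io.*;
import java.net.*;

// Sketch: accept one client, fetch the shoutcast stream, rewrite an
// "ICY 200 OK" status line to "HTTP/1.1 200 OK", then pipe the rest
// of the response through unchanged.
public class IcyProxy {
    public static void main(String[] args) throws IOException {
        String host = "radio.example.com";   // made-up stream host
        int port = 8000;                     // made-up stream port

        try (ServerSocket server = new ServerSocket(8080);
             Socket client = server.accept();
             Socket upstream = new Socket(host, port)) {

            // Minimal upstream request; shoutcast servers ignore most headers.
            OutputStream up = upstream.getOutputStream();
            up.write(("GET / HTTP/1.0\r\nHost: " + host + "\r\n\r\n").getBytes("ISO-8859-1"));
            up.flush();

            InputStream in = upstream.getInputStream();
            OutputStream out = client.getOutputStream();

            // Read the status line and rewrite the protocol token if needed.
            StringBuilder status = new StringBuilder();
            int c;
            while ((c = in.read()) != -1 && c != '\n')
                status.append((char) c);
            String line = status.toString().replaceFirst("^ICY", "HTTP/1.1");
            out.write((line + "\n").getBytes("ISO-8859-1"));

            // Remaining headers and the mpeg data go through untouched.
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1)
                out.write(buffer, 0, n);
        }
    }
}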

Steaming Poo.

I'm using the latest jdk 1.8.0 release candidate as of 12/2/14. I suspect it's a version compatibility issue with my ffmpeg build, or it could just be a bug in the media code - given it works with local files and particularly since it's using gstreamer: the second would be no surprise to me at all because gstreamer is a pile of shit.

   Stack: [0x84dad000,0x855ae000],  sp=0x855ad1e4,  free space=8192k
   Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
   C  [libfxplugins.so+0x8d28]  cache_has_enough_data+0xc
   C  [libfxplugins.so+0x74b8]  progress_buffer_loop+0xbd
   C  [libgstreamer-lite.so+0x6bdd5]  gst_task_func+0x203
   C  [libgstreamer-lite.so+0x6cebd]  default_func+0x29
   C  [libglib-2.0.so.0+0x6c3a1]  fileno+0x6c3a1
   C  [libglib-2.0.so.0+0x69bd0]  fileno+0x69bd0
   C  [libpthread.so.0+0x5e99]  abort@@GLIBC_2.0+0x5e99

My proxy code is also totally shit so that's always a possible cause but it shouldn't cause a crash regardless.

Update: The above was on an x86 machine; I also tried on an amd64 isa and it did the same. So probably version incompatibilities. Unfortunately those versions (at least) of ffmpeg can be compiled in binary incompatible ways even if the library is the same revision, and if it's been built against ubuntu or debian, well, they like to mess around with their packages.

Update 28/2/14: So I thought perhaps it was something to do with the debian mess of using libav instead of ffmpeg. But trying that results in the same crash. I also tried the libavcodec.so.53 that comes with the gst-ffmpeg-0.10.13 package ... but that doesn't include avcodec_open2(). Hmm, so much for version numbers eh.

Also tried ffmpeg 0.8.x and 0.9.x. But they all just crash in the same place.

On my new pc I also tried using the version of 'libavcodec.so.53' that comes with gst-ffmpeg-0.10.13 ... but that 'version 53' of ffmpeg doesn't include avcodec_open2(). Actually it looks like that is also using libav for fucks sake. Oh, so it seems that it's actually using ffmpeg 0.7.2 instead. How odd.

What were oracle thinking if it is built against libav - a buggy and insecure fork of another project and a version that is well beyond maintenance at that.

I guess I will keep trying different versions and see if one sticks. Pretty fucked up though - libraries have versions for a reason, so what's the point if they don't actually mean anything? Possibly the situation was compounded by the whole libav fork. Or it might just be a bug with gstreamer-lite.

Time passes ...

Ok, so 0.8.x and 0.9.x also provide 'libavcodec.so.53' ... and they all crash in the same place, so quite probably it's just the code in libfxplugins.

Shit, I even tried building openjfx ... but even though it has the source-code for libfxplugins.so ... it doesn't seem to build it and just copies it from the installed jre. It doesn't help that it uses gradle which is both slow and opaque.

See follow-up post.

Tagged hacking, java, javafx, jjmpeg.
Tuesday, 11 February 2014, 06:30

Habanero + Lime Cordial

So I finally got off my arse and made it to the beach on a week-day today. 1/2 hour easy ride (it's 41 today, too hot to rush) although I left it a bit late and caught some of the after-school traffic on the way home. Water was clear and cool and there were more people down there than I would have expected - not that any went into water deeper than their chest and some people were sun-baking (the sun is so friggan hot, wtf would you want to sit right out in it for? In it??). Saw a couple of dolphins swim past slowly about 40m further out.

Anyway a 1/2 hour ride home is enough to get pretty warm so I went for a cool drink (rehydration before I start on some beer) and I only had an experimental bottle of lime cordial I made last time - I dropped half a large ripe red habanero chilli into the bottle when I sealed it. I wasn't sure if it would really be what I was after.

It's ... certainly ... interesting.

As you're drinking it, it's a typical cool refreshing tangy lime flavoured drink. And then you stop. Your mouth, lips, and throat instantly start to gently and delicately burn.

So it makes you want to have more ...

... Ahh, nice cool refreshing tangy drink. And then you stop. The burning just intensifies.

Which makes you want even more.

Wow I said. Definitely something I'll do again next time I get some limes.

And as with most habanero-based heat the burning just keeps increasing the more you have - I guess the capsaicin must get stuck in your soft tissues for a long time. I know after cutting a lot up i've had burning fingers for a few days despite aggressive soapy scrubbing - your fingertips burn quite notably (to the point of pain) when you press your fingers together, and the harder you press the hotter they feel. And it hasn't affected the flavour: sometimes chillies add a capsicum note, but habaneros have their unique sweet flavour so it is probably just complementing the sugar.

Tagged cooking.