About Me

Michael Zucchi

 B.E. (Comp. Sys. Eng.)

  also known as Zed
  to his mates & enemies!

notzed at gmail
fosstodon.org/@notzed

Tuesday, 13 August 2013, 11:57

Progress on object detection

I spent more hours than I really wanted to trying to crack how to fit a haarcascade onto the epiphany in some sort of efficient way, and I've just managed to get my first statistics. I think they look ok but I guess i will only know once I load it onto the device.

First the mechanism i'm using.

Data structures

Anyone who has ever used the haarcascades from opencv will notice something immediately - their size. This is mostly due to the XML format used, but even the relatively compact representation used in socles is bulky in terms of epiphany LDS.

struct stage {
    int stageid;
    int firstfeature;
    int featurecount;
    float threshold;
    int nextSuccess;
    int nextFail;
};

struct feature {
    int featureid;
    int firstregion;
    int regioncount;
    float threshold;
    float valueSuccess;
    float valueFail;
};

struct region {
    int regionid;
    int left;
    int top;
    int width;
    int height;
    int weight;
};

For the simple face detector i'm experimenting with there are 22 stages, 2135 features and 4630 regions, totalling ~150K in text form.

So the first problem was compressing the data - whilst still retaining ease/efficiency of use. After a few iterations I ended up with a format which needs only 8 bytes per region whilst still including a pre-calculated offset. I took advantage of the fact that each epiphany core will be processing a fixed-and-limited-size local window, so the offsets can be calculated at compile time and fit within less than 16 bits. I also did away with the addressing/indexing and took advantage of some characteristics of the data to do away with the region counts. And I assumed things like a linear cascade with no branching/etc.

I ended up with something like the following - also note that it is accessed as a stream with a single pointer, so i've shown it in assembly (although I used an unsigned int array in C).

cascade:
    .int    22                           ; stages

    ;; stage 0
    .short  2,3                          ; 3 features each with 2 regions
    ;; one feature with 2 regions
    .short  weightid << 12 | a, b, c, d  ; region 0, pre-calculated region offsets with weight
    .short  weightid << 12 | a, b, c, d  ; region 1, pre-calculated region offsets with weight
    .float  threshold, succ, fail        ; feature threshold / accumulants
    ;; ... plus 2 more for all 3 features

    .short  0,0                          ; no more features
    .float  threshold                    ; if it fails, finish

    ;; stage 1
    .short 2,15                          ; 15 features, each with 2 regions

This allows the whole cascade to fit into under 64K in a readonly memory in a mostly directly usable form - only some minor bit manipulation is required.
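
To make that concrete, here's a rough C sketch of how one packed region record might be decoded and summed against the local SAT tile. The weight table and the corner ordering are just guesses for illustration - the real values depend on how the converter packs them.

/* hypothetical weight table indexed by the 4-bit weight id (values made up) */
static const float weights[16] = { -1.0f, 1.0f, 2.0f, 3.0f };

/* r points at the four .shorts of one region record, sat is the local
   32x32 summed-area-table tile (floats) */
static inline float region_sum(const unsigned short *r, const float *sat)
{
    float w = weights[r[0] >> 12];   /* weight id packed into the top bits */
    unsigned int a = r[0] & 0x0fff;  /* pre-calculated corner offsets */
    unsigned int b = r[1];
    unsigned int c = r[2];
    unsigned int d = r[3];

    /* SAT box sum, assuming a,b,c,d = top-left, top-right, bottom-left, bottom-right */
    return w * (sat[a] - sat[b] - sat[c] + sat[d]);
}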

Given enough time I will probably try a bigger version that uses 32-bit values and 64-bit loads throughout, to see if the smaller code required for the inner loop outweighs the higher memory requirements.

Overlays ...

Well I guess that's what they are really. This data structure also lets me break the 22-stage cascade into complete blocks of some given size, or arbitrarily move some to permanent local memory.

Although 8K is an obvious size, since i want to double-buffer and also keep some stages in permanent local memory - and still have room to move - I decided on the size of the largest single stage instead: about 6.5K, with 1.2K of stages moved to permanent LDS. But this is just a first cut and a tunable.

The LDS requirements are thus modest:

    +-------------+
    |             |
    |local stages |
    |    1k2      |
    +-------------+
    |             |
    |  buffer 0   |
    |    6k5      |
    +-------------+
    |             |
    |  buffer 1   |
    |    6k5      |
    +-------------+

With these constraints I came up with 13 'groups' of stages, 3 stages in the local group, and the other 19 spread across the remaining 12 groups.

This allows for very simple double-buffering logic, and hopefully leaves enough processing at each buffer to load the next one. All the other stages are grouped into blocks of under 6k5 which are referenced by DMA descriptors built by the compiler.

This also provides a tunable for experimentation.
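
For what it's worth the grouping itself is just a greedy pass over the stage sizes done offline; something along these lines, where the names and the 6k5 limit are only illustrative:

#define GROUP_LIMIT (6 * 1024 + 512)     /* ~6k5, the largest single stage */

/* pack whole stages into DMA groups no larger than GROUP_LIMIT; stage_bytes[]
   holds the encoded size of each stage and the first 'first_dma_stage' stages
   are assumed to live permanently in LDS */
int build_groups(const int *stage_bytes, int nstages, int first_dma_stage,
                 int *group_start)
{
    int ngroups = 0;
    int used = GROUP_LIMIT + 1;          /* force a new group on the first stage */

    for (int i = first_dma_stage; i < nstages; i++) {
        if (used + stage_bytes[i] > GROUP_LIMIT) {
            group_start[ngroups++] = i;  /* start a new group here */
            used = 0;
        }
        used += stage_bytes[i];
    }
    return ngroups;
}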

Windowing

So the other really big bandwidth drain is that these 6000 features are tested at each origin location at each scale of image ...

If you have a 20x20 probe window and a 512x512 image, this equates to 242064 window locations, each of which needs to run at least stage 0 - 6 regions at 4 memory accesses per region, which is about 5.8 million 32-bit accesses or 23 megabytes. If you just access the image data directly it doesn't cache well on a more sophisticated chip like an ARM (512 floats is only 4 lines at 16K cache), and obviously direct access is out of the question for epiphany.

So one is pretty much forced to copy the memory to the LDS, and fortunately we have enough room to copy a few window positions, thus reducing global memory bandwidth about 100 fold.

For simplicity my first cut will access 32x32 windows of data, and with a cascade window size of 20x20 this allows 12x12 = 144 sub-window probes per loaded block to be executed with no external memory accesses. 2D DMA can be used to load these blocks, and what's more at only 4K there's still just enough room to double buffer these loads to hide any memory access latency.

LDS for image data buffers:

    +-------------+
    |    SAT 0    |
    | 32x32xfloat |
    |     4k      |
    +-------------+
    |    SAT 1    |
    | 32x32xfloat |
    |     4k      |
    +-------------+

A variance value is also required for each window, but that is only 144 floats and is calculated on the ARM.
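
For reference, the variance is just the usual Viola & Jones normalisation; a sketch of what the ARM might do per 32x32 tile is below. It assumes a second summed-area table of squared values (which i haven't talked about above) and the exclusive-prefix SAT convention - treat it as illustrative only.

/* sum over the w x h rectangle at (x,y), exclusive-prefix SAT with row 'stride' */
static inline float box_sum(const float *sat, int stride, int x, int y, int w, int h)
{
    return sat[y * stride + x] - sat[y * stride + x + w]
         - sat[(y + h) * stride + x] + sat[(y + h) * stride + x + w];
}

/* one variance per probe location of a tile: 12x12 = 144 floats for a
   20x20 cascade window at tile origin (tx,ty) */
void tile_variance(const float *sat, const float *sqsat, int stride,
                   int tx, int ty, float *var)
{
    const float n = 20.0f * 20.0f;

    for (int y = 0; y < 12; y++) {
        for (int x = 0; x < 12; x++) {
            float s  = box_sum(sat,   stride, tx + x, ty + y, 20, 20);
            float ss = box_sum(sqsat, stride, tx + x, ty + y, 20, 20);
            float mean = s / n;

            var[y * 12 + x] = ss / n - mean * mean;   /* E[x^2] - E[x]^2 */
        }
    }
}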

Algorithm

So the basic algorithm is:

    Load in 32x32 window of summed-area-table data (the image data)
    prepare load group 0 into buffer 0 if not resident
    for each location x=0..11, y=0..11
        process all local stages
        if passed add (x,y) to hits     
    end

    gid = 0
    while gid < group count AND hits not empty
        wait dma idle
        gid++;
        if gid < group count
            prepare load group gid into dma buffer gid & 1
              if not resident
        end
        for each location (x,y) in hits
           process all stages in current buffer
           if passed add (x,y) to newhits
        end
       hits = newhits
    end

    return hits

The window will be loaded double-buffered on the other dma channel.

Efficiency

So i have this working on my laptop stand-alone to the point that I could run some efficiency checks on bandwidth usage. I'm using the basic lenna image - which means it only finds false positives - so it gives a general lowerish bound one might expect for a typical image.

Anyway these are the results (I may have some boundary conditions out):

Total window load         : 1600
Total window bytes        : 6553600
Total cascade DMA required: 4551
Total cascade DMA bytes   : 24694984
Total cascade access bytes: 371975940
Total SAT access bytes    : 424684912

So first, the actual global memory requirements. Remember that the image is 512x512 pixels.

window load, window bytes

How many windows were loaded, and how many bytes of global memory this equates to. Each window is 32x32xfloats.

This can possibly be reduced as there is nearly 1/3 overlap between locations, at the expense of more complex addressing logic. Another alternative is to eschew the double-buffering and trade lower bandwidth for a small amount of idle time while it re-arranges the window data (even a 64x32 window is nearly 5 times more global-bandwidth efficient, but i can't fit 2 in for double buffering).

cascade DMA count, cascade DMA bytes

How many individual DMA requests were required, and their total bytes. One will immediately notice that the cascade is indeed the majority of the bandwidth requirement ...

If one is lucky, tuning the size of each of the cascade chunks loaded into the double buffers may make a measurable difference - one can only assume the bigger the better as DMA can be avoided entirely if the cascade aborts within 11 stages with this design.

Next we look at the "effective" memory requirements - how much data the algorithm is actually accessing. Even for this relatively small image without any true positives, it's quite significant.

It requires something approaching a gigabyte of memory bandwidth just for this single scale image!

Total SAT bytes

The total image (SAT) data accessed is about 420MB, but since this code "only" needs to load ~6.5MB it represents a 65x reduction in bandwidth requirements. I am now curious as to whether a simple technique like this would help with the cache hit ratio on a Cortex-A8 ...

Total cascade bytes

The cascade data that must be 'executed' for this algorithm is nearly as bad - a good 370MB or so. Here the software cache isn't quite as effective - it still needs to load 24MB or so, but a bandwidth reduction of 15x isn't to be sneezed at either. Some of that will be the in-core stages. Because of the double buffering there's a possibility the latency of accessing this memory can be completely hidden too - assuming the external bus and mesh fabric aren't saturated, and/or the cascade processing takes longer than the transfer.

LDS vs cache can be a bit of a pain to use but it can lead to very fast algorithms and no-latency memory access - with no cache thrashing or other hard-to-predict behaviour.

Of course, the above is assuming i didn't fuck-up somewhere, but i'll find that out once I try to get it working on-device. If all goes well I will then have a baseline from which to experiment with all the tunables and see what the real-world performance is like.

Tagged hacking, parallella.
Sunday, 11 August 2013, 08:47

Object detection, maths, ...

Curiosity got the better of me and I poked around the code side of my parallella board a bit today.

Following on from a forum post about it, i started thinking about how to fit the Viola & Jones 'haar cascade' object detector into the epiphany, and I started looking at an assembly version of the inner loop. If I arrange the data appropriately and force some assumptions i can get it down to around 15 instructions per feature test, which is pretty decent. And in fact it can even run in the lower 8 registers and so is very compact too (16-bit encodings). I'm actually pretty confident I can get some decent efficiency out of it, although i'm not sure how that will translate to performance yet.

I have a few ideas how i can handle the large size of cascades fairly efficiently - first by having the lowest and most frequently accessed levels stored in LDS, and then either relying on their rarity to handle the upper stages, or using some pre-fetch mechanism. 32K is bloody tight though.

Although most of the calculations are simple mul + add and some comparisons, there is also a square root required per window. I can probably move the calculation of that to the ARM side (although then I would pretty much have to move all the scaling there too), depending on how fast the epiphany ploughs through the window tests anyway.

I'm still not quite sure how the data flows between host and cu, but the examples should contain enough information to work that out.

fdiv

So I also looked into - and got pretty much distracted by - floating point divide. Something that is required when e.g. calculating the square root. Since the epiphany has neither reciprocal estimate nor divide, one must implement it oneself.

I think I managed to implement the Newton-Raphson division mechanism from Wikipedia in about 40 instructions. Unfortunately there are a lot of data-stalls due to the feedback nature of the algorithm (and me not particularly wanting to hand-schedule the bits where there is leeway), but it runs in about 75 clock cycles with no divide by zero or other checks going on. A C implementation of the identical algorithm takes about 123, and using / in C takes about 131 (with -ffast-math, i'm not sure how the ieee error checks differ). Anyway it's not something vj needs that much of, so C would probably suffice. Divide seems to drag in a bit of libc too, whereas a stand-alone is very compact.

Update: not sure if libm was in external memory either actually, i would have to go back and have a look.
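
For the record, the C version of the same recurrence looks something like the following - just a sketch, using frexpf/ldexpf for the scaling step (which the assembly would presumably do with bit operations) and with no zero/inf/NaN handling:

#include <math.h>

/* a/b via the Newton-Raphson reciprocal x' = x * (2 - d*x); no error checks */
static float nr_div(float a, float b)
{
    int e;
    float d = frexpf(b, &e);                        /* b = d * 2^e, |d| in [0.5, 1) */
    float s = d < 0 ? -1.0f : 1.0f;

    d *= s;                                         /* iterate on |d| */

    float x = 48.0f / 17.0f - (32.0f / 17.0f) * d;  /* linear initial estimate */

    for (int i = 0; i < 3; i++)                     /* each pass ~doubles the good bits */
        x = x * (2.0f - d * x);

    return ldexpf(a * x * s, -e);                   /* restore sign and scale */
}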

Instruction set

So these two little investigations helped expose me to some of the nuances of the instruction set. Such as the lack of a unary NOT - one must use EOR(~0) instead (which takes 3 instructions - 2 to load ~0, and the eor itself). The lack of bit-field instructions is no surprise, although that fact still doesn't make the pain of emulating them any more fun.

The offset addressing modes are nice in that the index is scaled to the data-size, which makes the 3-bit version quite a bit more useful than it might otherwise be.

Not having r15 as the PC has an obvious side-effect: no direct way to implement PC-relative addressing and position independent code ... although I presume movfs can be used to load the PC to a register, and that can be used instead.

Tagged hacking, parallella.
Friday, 09 August 2013, 06:02

Wow my desk is dusty

Just a couple of shots of my shitty jury-rigged 'case' for my parallella rev-0.

Base-board is screwed to a CD-ROM with some 4mm irrigation pipe 'stand-offs'. It's then sitting inside the upper-half of a 3.5" FDD case.

The fan is just sitting on the hole from the FDD motor/flywheel (the picture is with it turned on). It's powered from the 5v supply in a rather unreliable manner ...

It does the job anyway and seems to keep the chips quite cool. It's a 12v fan running off 5v so it's essentially silent too.

The camera really shows up the (embarrassing level of) dust, as it's usually too dark for it to be quite so visible. Particularly as it's under one monitor/next to a laptop so my eyes are normally blinded by that.

Tagged parallella.
Wednesday, 07 August 2013, 01:10

Parallella rev-0

So I got my first parallella board yesterday. I don't have much to report other than that I plugged it in and it worked. The circuit board is literally credit-card sized and really packed with components - very impressive. With 1G ram and the dual-core arm the base cpu seems quite nippy too - I haven't run any benchmarks vs the beagleboard-xm, but it certainly feels significantly faster from the very limited amount of use so far (e.g. emacs seems usable even via remote X). Ubuntu/debian is a big turn-off though.

Unfortunately the early-adopter rev-0 boards are a bit sub-optimal and don't have working USB (by far the biggest amongst a few faults; hdmi doesn't work yet apparently but that's only a firmware issue and I don't have the correct cable yet in any event). And as I found with the beagleboard - without USB these things are severely limited, so i'm pretty bummed about that (I expect a future i/o expansion board will at least remedy this). I kind of made a mistake in that I really wanted 2x64 core boards but was worried about having import duty hassles. And because I was on leave and staying away from my laptop i missed the email offer to switch for these early boards until it was too late. Update: I should've checked the message board first, it seems USB should work at some point.

Anyway as one might notice from the lack of posts lately i've kind of gone off pretty much everything technology related. It's partly the weather, but also a change of perspective due to age, health, and so on. I had a few weeks off over that time and didn't touch any code at all, and didn't even feel like thinking about it much either. I did notice the new OpenCL spec which has some interesting features, but i've mostly just been reading online news/forums (but not participating) and playing some PS3 games - and whilst the hysteria and ignorance in the fora has had its entertainment value, the sameyness is wearing thin.

So right now i'm not really too enthused just yet about poking the epiphany chip (let alone the fpga) or trying to provide feedback on the sdk/etc. Pity it didn't come on-time, when I was more into that shit (as i generally am over summer).

Back at work it looks like i'm on some pretty dull stuff for the next little bit too, which is pretty sapping.

Ideas

Update: responding to the comment below I thought i'd update the post ...

At this point I don't have anything specific I want to do with the board but as I think about it there are a few possibilities of things I might be interested in poking at ...

Whatever I do will probably be limited to small learning experiments rather than long-term applications (that's what work is for). I need to set aside the time and find the enthusiasm to get started too. Time isn't something i'm short of!

At least the more I think and write about it the more the desire builds to get stuck into it ...

Tagged parallella.
Wednesday, 12 June 2013, 04:32

Into the cloud!

So yeah i've been a bit bored/insomniacal[sic] lately and reading the nets ... and one topical topic is the next set of game consoles from microsoft and sony.

I still can't believe how much microsoft ball(mer)sed-up their marketing message, but I guess when you live in such a bubble as they do it probably seemed like a good idea at the time. Sad that sony gets such cheers for merely keeping things the same and letting people SHARE the stuff they buy with their hard-earned. But when microsoft intentionally avoid the 'share' word it isn't so surprising. Incidentally the microsoft used game thing reeks somewhat of the anti-trust trouble that apple are currently in; although they seem to have made an effort to ensure it was regulator-safe by a bit of weaselling in the way they structured it. i.e. they facilitated the screwing of customers without mandating it.

But back to the main topic - the whole 'it's 4x faster due to the cloud' nonsense.

Ok, obviously they shat bricks over the fact that the PS4 is so much faster on paper. The raw FPU performance is 50% better, but I would suggest that the much higher memory bandwidth (~3x) and the sony hardware scheduler tweaks will make that more like 100% faster in practice, or even more ... but time will tell on that. With 1080P framebuffers, 32MB vanishes pretty fast - it's only just enough for 4x8-bit RGBA framebuffers, or say a 16-bit depth buffer and 1xRGBA colour and 1xRGBA accumulation buffer. With HDR, deferred rendering, and high fp performance and so on, this will be severely limiting and it's still only a relatively meagre 100GB/s anyway. microsoft made a big bet on the 32MB ESRAM thinking sony would somehow over-engineer the design; and they simply fucked up (the dma engines are of course handy, but even the beagleboard has a couple of those and it can't make up bandwidth). As another aside, I lost a great deal of respect for anand from anandtech when he came up with the ludicrous suggestion that the 32MB of ESRAM might actually be a hardware associative cache. For someone who claims to know a bit about technology and fills his articles with tech talk, he clearly has NFI about such a fundamental computer architecture component. It suggests PR departments are writing much of his articles.

It learns from its mistakes?

So anyway ... most of the talk about the cloud performance boosting is just crap. Physics or lighting will not be moved to the internet because internet performance just isn't there yet - and the added overheads of trying to code it just aren't worth it. Other things like global weather could be, but it's not like mmo's can't already do this kind of thing and unless you're building 'Cyclones! The game!' the level of calculation required will be minimal anyway. However ... there is one area where I think a centralised computing capability will be useful: machine learning.

Most machine learning algorithms require gads and gads of resources - days to weeks of computing time and tons of input data. However the result of this work is a fairly dense set of rules that can then be sent back to the games at any time. Getting good statistical information for machine learning algorithms is a challenge, and having every machine on the network allows them to gather exactly that, and then feed the results back.

So a potential scenario would be that each time you play through a single player game, the AI could learn from you and from all other players as you go; reacting in a way that tries to beat you, at the level of performance you're playing at. i.e. it plays just well enough that you can still win, but not so that it's too easy. Every time you play the game it could play differently, learning as you do. We could finally move beyond the fixed-waves of the early 80s that are now just called 'set pieces'.

It would be something nice to see, although a cheaper and easier method is just to use multi-player games to do the same thing and use real players instead. So we may not see it this iteration: but I think it's about time we did.

Maybe some indie developer can give it a go.

sony or microsoft could also use the same technique to improve the performance of their motion based input systems, at least up to some asymptotic performance limit of a given algorithm.

But ...

However, microsoft have no particular advantage as it isn't the 'cloud services' that are important here, it's just that every machine has a network port. Sure having some of the infrastructure 'done for you' is a bit of a bonus, but it's not like internet middleware is a new thing. There are a bevy of mature products to choose from, and microsoft is just one (mediocre) player from many. And a 3rd party could equally go to any other 3rd party for the resources needed (although I bet microsoft won't let them on their platform: which is another negative for that device).

Actually ...

Update: Actually I forgot to mention that I really think the whole 'always online' and kinect-required gig in microsoft's case is all about advertising: it can see who is in a room, age, gender, ethnicity, and if you have an account registered even more details on the viewer. It could track what people in the room are doing during game-playing as well as watching tv shows and advertising; probably even where they're looking, what they're talking about, and what those tv shows/advertisements are. It doesn't take much more to "anonymously" link your viewing habits with your credit card transactions.

A marketer's wet dream if ever there was one. A literal "fly on the wall" in every house that has "one".

And if you think this is hyperbole, you just haven't been paying attention. Google (and others) already do all this with everything you do on the internet or on your phone, why should your lounge room be any different? I was pretty creeped out when I started seeing adverts that seemed to be related to otherwise private communications.

Let's just see how long before people start seeing advertising popping up (perhaps over the TV shows they're watching, or within/over games?) that matches their viewing habits and lounge-room demographics or what they were doing last night on the coffee table. And even if they don't get there "this generation", it's clearly a long-term goal.

So whilst one can do some neat stuff with the network, that's just a side-effect and teaser for the main game. Exactly like google and all its "free" services. Despite paying for it this time, you're still going to be the product. Even the DRM stuff is a side-show.

Prices

Well as usual $AUS gets shafted by the 'overheads' of the local market. But you know what? Who cares. They're both cheaper than the previous models even in face-value-dollars let alone real ones, and we don't treat our less fortunate workers as total slaves in this country (at least, not yet).

They might still be a luxury item but in relative terms they've never been cheaper. My quarterly electricity bill has breached $500 already and it's only going up next year.

The initial price of the device is always only a part of the cost (and for-fucks-sake, it is NOT a fucking investment), and pretty small part with the price of games, power, the tv/couch, and internets on top.

I'm still not sure if i'll get a ps4: given the amount of games i've played over the last 12 months it would be pretty pointless. I've still got a bunch of unopened ps3 games - I think I just don't like playing games much, they're either too easy and boring, too much like work, or I hit a point I can't get past and I don't have the patience to beat a dumb computer and feel good about it (and i'm just generally not into 'competition', i'd rather lose than compete). A PS4 CPU+memory in a GNU/linux machine on the other hand could be pretty fun to play with.

Tagged games, rants.
Monday, 10 June 2013, 03:32

Clamping, scaling, format conversion

Got to spend a few hours poking at the photo-effects app i'm doing in conjunction with 'ffts'. I ended up having to use some NEON for performance.

One interesting solution along the way was code that took 2x2-channel float sequences (i.e. 2xcomplex number arrays) and re-wound them back to 4-channel bytes, including scaling and clamping.

I utilised the fixed-point variant of the VCVT instruction which performs the scaling to 8 bits with clamping below 0. For the high bits I used the saturating VQMOVN variant of move with narrow.

I haven't run it through the cycle counter (or looked up the details) so it could probably do with some jiggling, or widening to 32 bytes/iteration, but the current main loop is below.

        vld1.32         { d0[], d1[] }, [sp]

        vld1.32         { d16-d19 },[r0]!
        vld1.32         { d20-d23 },[r1]!     
1:
        vmul.f32        q12,q8,q0               @ scale
        vmul.f32        q13,q9,q0
        vmul.f32        q14,q10,q0
        vmul.f32        q15,q11,q0

        vld1.32         { d16-d19 },[r0]!       @ pre-load next iteration
        vld1.32         { d20-d23 },[r1]!

        vcvt.u32.f32    q12,q12,#8              @ to int + clamp lower in one step
        vcvt.u32.f32    q13,q13,#8
        vcvt.u32.f32    q14,q14,#8
        vcvt.u32.f32    q15,q15,#8

        vqmovn.u32      d24,q12                 @ to short, clamp upper
        vqmovn.u32      d25,q13
        vqmovn.u32      d26,q14
        vqmovn.u32      d27,q15

        vqmovn.u16      d24,q12                 @ to byte, clamp upper
        vqmovn.u16      d25,q13

        vst2.16         { d24,d25 },[r3]!

        subs    r12,#1
        bhi     1b

The loading of all elements of q0 from the stack was the first time I've done this:

        vld1.32         { d0[], d1[] }, [sp]

Last time I did this I think I did a load to a single-precision register or an ARM register and then moved it across, and I thought that was unnecessarily clumsy. It isn't terribly obvious from the manual how the various versions of VLD1 differentiate themselves unless you look closely at the register lists. d0[],d1[] loads a single 32-bit value to every lane of the two registers, or all lanes of q0.

The VST2 line:

        vst2.16         { d24,d25 },[r3]!

Performs a neat trick of shuffling the 8-bit values back into the correct order - although it relies on the machine operating in little-endian mode.

The data flow is something like this:

 input bytes:        ABCD ABCD ABCD
 float AB channel:   AAAA BBBB AAAA BBBB
 float CD channel:   CCCC DDDD CCCC DDDD   
 output bytes:       ABCD ABCD ABCD

As the process of performing a forward then inverse FFT ends up scaling the result by the number of elements (i.e. *(width*height)), the output stage requires scaling by 1/(width*height) anyway. But this routine requires further scaling by 1/255 so that the fixed-point 8-bit conversion works; that is performed 'for free' using the same multiplies.
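
In other words the constant splatted into q0 at the top of the loop works out to something like:

/* FFT normalisation and the byte conversion folded into one constant */
float scale = 1.0f / (width * height * 255.0f);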

This is the kind of stuff that is much faster in NEON than C, and compilers are a long way from doing it automatically.

The loop in C would be something like:

#include <complex.h>
#include <stdint.h>

float clampf(float v, float l, float u) {
   return v < l ? l : (v < u ? v : u);
}

    complex float *a;
    complex float *b;
    uint8_t *d;
    float scale = 1.0f / (width * height);
    for (int i=0;i<width;i++) {
       complex float A = a[i] * scale;
       complex float B = b[i] * scale;

       float are = clampf(creal(A), 0, 255);
       float aim = clampf(cimag(A), 0, 255);
       float bre = clampf(creal(B), 0, 255);
       float bim = clampf(cimag(B), 0, 255);

       d[i*4+0] = (uint8_t)are;
       d[i*4+1] = (uint8_t)aim;
       d[i*4+2] = (uint8_t)bre;
       d[i*4+3] = (uint8_t)bim;
    }

And it's interesting to me that the NEON isn't much bulkier than the C - despite performing 4x the amount of work per loop.

I set up a github account today - which was a bit of a pain as it doesn't work properly with my main browser machine - but I haven't put anything there yet. I want to bed down the basic data flow and user-interaction first.

Tagged android, beagle, code, hacking, picfx.
Friday, 24 May 2013, 02:20

on google

So google have decided to disable downloads on google code.

So I have decided to stop using it.

... although as yet I have no concrete plans or timeline for when this decision will take effect.

Whilst they claim it's about abuse, one can only assume that is just a "likely-sounding excuse" for what in reality is just another straight-up lie from the PR department of a supra-national conglomerate, and it's really just a way to cut costs and promote their 'drive' service (a useless microsoft/apple only service as far as i'm concerned).

Nobody seems to have reported that they have also gimped their POP interface to gmail a couple of days ago. No more UID support. This makes POP a lot less reliable/useful as a mail store (although in honesty it was never designed for that purpose). I proceeded to delete all the mail in gmail to help them free up some disk space.

I guess over-all the writing is on the wall. We all know that at some point 'google account' will mean 'google+', and blogger may be retired at any time.

So it seems my on-going-but-totally-lax search for alternatives to 'everything google for convenience' just got another big kick up the rump-side.

As my projects are all pretty small and low-volume I might look at a local solution because every network based solution faces the same problem. I have a couple of beagleboards doing nothing although getting a running and secure-enough system might be more pain than it's worth.

It's a bit of a pain to have to deal with.

Tagged beagle, dusk, hacking, imagez, java, javafx, jjmpeg, mediaz, pdfz, puppybits, rants, readerz, socles, videoz.
Tuesday, 21 May 2013, 09:03

on build systems

So i'm kind of baffled by gradle.

"power and flexibility of ant" with [enforced] "conventions of maven".

Sounds like it cherry picked the two worst parts of both outside of using XML!

Actually it looks ok enough for simple projects, but then again pretty much every tool is because solving simple problems is always ... simple. However I think the decision to go with implementing it in a scripting language is just going to lead to some pretty nasty long-term maintenance problems.

The only valid argument for something like ant is that the configuration files are machine readable (even if they aren't human readable!), which can lead to tooling support (ok, ant isn't very machine readable anyway, i'm just stating that it could be valid if they did it right). So it's kind of strange that gradle eschews that for something which is about as parseable as a batch file.

Of course it's the flash new kid on the block so it will go through a rapid adoption phase, but like every other tool before it cracks will then start to appear.

I'm also a little baffled by the claim that somehow groovy is just java and so it's easier for java developers. Doesn't look anything like java to me. At all. Actually even if it were true, i think that would be a problem not a benefit. Java is just not the right language to use for the problems that build systems solve.

At least it's better than ant, but that's a pretty low bar. At best ant isn't much better than a 'build-all.sh' file, and demonstrably worse in many ways.

automake

I've put a few hours into getting somewhere on the java automake stuff. However I seem to have got stuck in an extended discussion on how a zip file works. The java build process is so simple I don't think anyone who is only familiar with C can grasp it.

I guess the main impression I get is that there isn't a particularly strong desire for simplicity vs 'the way we do it', which is a bit frustrating. If I end up with something I wouldn't want to use myself there doesn't seem much point. And given that in the intersection of the sets of 'i write java' and 'i want to use makefiles' and 'using automake isn't utterly and completely out of the question' i'm probably one of about a dozen unique and beautiful snowflakes, there isn't much hope if i'm not interested myself. Actually i may not use it anyway.

So although earlier I was more optimistic, now i'm not sure where it's headed. I have some fragments which do part of the job, but given the difficulty i've had in explaining this simple external stuff i'm not sure I'm mentally up to trying to create and then explain any code inside automake.in. I'm not really that thrilled with the idea of trying to provide a complete patch anyway.

Most (big) projects seem to want every potential contributor to kow-tow to the whims of some god-like maintainer, as if you're the one who should feel privileged that they deign to even entertain the idea of you doing free work for them. I'm ashamed that this is exactly how we did things in Evolution, and I now regret it. There's quite a difference between a casual contribution and a long-term maintainer. I have no idea if automake is like that, but my patience threshold is pretty low these days so it wouldn't have to be for me to suddenly not give a shit (i get paid to put up with crap, it's not something I need to volunteer for).

Tagged rants.
Copyright (C) 2019 Michael Zucchi, All Rights Reserved. Powered by gcc & me!