Although i haven't been posting about it i've been continuing to poke around in bits and pieces of code. Well, a little bit.
I did a bit of OpenCL last week and that was pretty fun. I had enough time to really dig into optimising a particular routine and was down to inspecting the ISA output from the driver. Good stuff. The GCN ISA is pretty foreign to me so I had to use small snippets to isolate operations of interest.
For example one construct that comes up repeatedly when parallelising code is using a divide and modulus operator when splitting up a non-work-sized job into work-group sized blocks.
int block_size = info.block_size;

for (int id=get_local_id(0); id < limit; id+=64) {
    int block_no = id / block_size;
    int block_index = id % block_size;

    // do work
}
Where possible one just chooses a power of 2 so this becomes a simple shift and mask, and integer division by a constant isn't too bad either as it can usually be optimised by the compiler. But this problem required a dynamic block size that wasn't a power of 2.
The solution? Use floating point and multiply by the reciprocal, which can be calculated efficiently or, as here, off-line. The problem is that this introduces enough rounding error to be worthless without some more work.
I must admit I just found the solution empirically here: i had a limited range of possible values so I just exhaustively tested them all against a couple of guesses. Hey it works, i'm no scientist.
float block_size_1 = info.block_size_1;

for (int id=get_local_id(0); id < limit; id+=64) {
    int block_no = (int)(id * block_size_1 + 1.0f / 16384);
    int block_index = id - (block_no * block_size);

    // do work
}
This replaces the many-instruction integer division decomposition with a convert+mad+convert.
On some workloads this was a 25% improvement to the entire routine, and these are just 2 lines in an inner loop of about 50 lines of code.
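Something like this stand-alone Java sketch is the sort of exhaustive check mentioned above; the block-size and id ranges here are made-up stand-ins for the limited range in question, not the actual values:

public class RecipDivCheck {
    public static void main(String[] args) {
        for (int bs = 1; bs <= 64; bs++) {              // assumed range of block sizes
            float bs_1 = 1.0f / bs;                     // reciprocal, calculated off-line in the real code
            int mismatches = 0;

            for (int id = 0; id < (1 << 16); id++) {    // assumed range of ids
                int exact = id / bs;
                int approx = (int) (id * bs_1 + 1.0f / 16384);

                if (exact != approx)
                    mismatches++;
            }
            System.out.println("block_size=" + bs + " mismatches=" + mismatches);
        }
    }
}

If any block size reports mismatches over the range that matters then the bias constant needs revisiting - which is about as scientific as it gets.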
Well it's been fun to play at this level again - it's ... mostly ... pointless going to this level but it just adds to the toolkit and I enjoy poking. Maybe one day i'll have a job where it's useful.
I gave zcl a go on this as originally I was thinking of trying some OpenCL 2 stuff but I may not bother now. Given the lack of use/testing it was pretty much bug free but I started filling out the API with some more convenient entry points. I also decided to add some more java-array interfaces here and there: they're just too convenient and it hides the mess in the C even if they might not be the most efficient in all cases.
This is the sort of thing i'm talking about:
float[] data = new float[] { 1, 2, 3, 4, 5 };
CLBuffer buffer = cl.createBuffer(CL_MEM_COPY_HOST_PTR, data);

vs

float[] data = new float[] { 1, 2, 3, 4, 5 };
ByteBuffer bb = ByteBuffer.allocateDirect(data.length * 4).order(ByteOrder.nativeOrder());
bb.asFloatBuffer().put(data);
CLBuffer buffer = cl.createBuffer(CL_MEM_COPY_HOST_PTR, data.length * 4, bb);
It's only two fewer lines of code ... but yeah that shit gets old fast. The first is more efficient too because this is a native method and it avoids the copy. In the first case CL_MEM_USE_HOST_PTR throws an exception though, and in the second it works (library call permitting).
The main downside is adding these convenience calls blows out the method count very quickly if you support all the primitive types - which detracts from the ease of use they're supposed to increase.
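A hypothetical sketch of the sort of helper that sits behind such a call (not zcl's actual internals) shows why: every primitive array type needs its own copy of it.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

final class Buffers {
    // wrap a java array in a native-order direct buffer so the C side can use it
    static ByteBuffer direct(float[] data) {
        ByteBuffer bb = ByteBuffer.allocateDirect(data.length * 4).order(ByteOrder.nativeOrder());
        bb.asFloatBuffer().put(data);
        return bb;
    }

    // ... and again for int[], short[], byte[], long[], double[], and so on
}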
Another release? Who knows when.
And this week i've been poking at some OpenGL. My it's grown. I'm experimenting using JOGL for this although i'm not a fan of some of its binding choices. It's crossed my mind but i'm pretty sure i don't want to create yet another binding as in a 'ZGL'. Hmm, I wonder if vulkan will clean up the cross-platform junk from opengl.
Unfortunately my home workstation seems to have developed a faulty DIMM or something (an unrelated note, but of note).
So i've mentioned it a few times on here - i've been steadily losing weight since February. It's settled now at about 74kg - in part because it just seemed to stop on its own, and it's enough off that i'm no longer trying to actively lose weight. It is surprising to me how little food it takes to maintain this so far but i haven't been exercising a lot either.
I'm not sure if the initial trigger was all mental or all physical but in part the shock of getting gout, and a full freezer and pantry bereft of 'gout friendly' foods, kicked off a period of simply being able to lose weight on a whim. I guess I somehow shrank my stomach enough to change how my body detects hunger and once it started I just ran with it - i can't say I tried too hard, but I'd wanted to lose a bit for years. I still get hungry, it just doesn't really bother me like it used to and turn into an overwhelming need to eat. I did a few experiments in the midst of it to see how little I could get away with and it wasn't hunger that drove me to eat but a sore stomach.
At 92kg I was sadly bang on average for a bloke my age; but that is a lot more than my body can take. Although the belt was the first indication of making progress it wasn't till i broke under about 84kg that I looked (to myself) like I was making any progress; that's probably about when my waist to hip ratio broke under 1.0. After that each kg seemed noticeable one way or another.
I don't feel great or anything, but I would be lying if i didn't say I felt `less shit'.
A couple of friends (but thankfully not all!) are already saying I "need to eat" like i'm underweight or something. I'm quite some way from that even if i'm below the average these days. I was skinny until I got a job in the city and started having rich lunches and regular Friday drinks together with a shorter commute cycle and it's slowly accumulated since.
SBS has had a series of diet/health related shows on recently and out of curiosity I looked up some of the stuff i've been typically eating.
Apart from a very low total number of joules around 2/3 of the energy was from fats and oils. Protein was low. I think "diet", "light", "lite" and "low fat" foods are complete nonsense so I certainly wasn't having any of that.
Just bread and butter ended up being a big part of what I was eating (and i don't hold back on the butter). A few spuds and some rice at times. Lots of nuts, mostly almonds. Matured cheddar cheese. Lemons and limes as they are in season. Almost no meat, some but not much grog. Lots of coffee (usually black+none) and tea (green, or white+1/4). But I also had what herbs i could find in the garden mostly in tom-yum-like soup, and plenty of chillies and other random stuff along the way (the chillies were important in one critical way beyond making the bland more palatable). Lots and lots of water.
This isn't what I had all of the time but it was in the majority. It's also not something i'll be continuing but it was certainly effective at losing some weight this time.
One thing I did notice is that a big dinner means big morning hunger. I'd already noticed this before but i'm now more convinced of it. If you ignore it, it just goes away - but it's also easy enough to just eat less at night.
This episode of gout also enforced just how useless the internet has become as a source of general information. Almost all the gout 'advice' is useless, being generous. Even the stuff from established medical sources wasn't terribly applicable to me due to being neither 70 nor obese.
For example low-fat is always recommended: but it's never specified whether that has anything (at all) to do with gout itself or merely with being overweight.
Anyway i'm glad I lost the weight, I wonder if it will stay off this time?
Update: Well 6 weeks later and it's still dropping - 72-73kg now. I'm eating properly now too - probably better than i have for years and certainly not going hungry.
Update: 14 weeks from the first post and it kept dropping slowly - bang on 70kg has been the base for a couple of weeks now. Damn I haven't been this skinny in a long long time and i like returning to it. Except the fucking arthritis keeps coming back. So despite the good, actually the present is very dark and the future looks even more grim.
So my sore foot just kept getting worse despite all efforts of rest so I returned to a doctor. One quick look and he just said 'gout' and prescribed some drugs.
I was a little sceptical and the list of side-effects of the colchicine was a little off-putting but I gave it a go. Well within a few hours all the pain was gone and within a day so was the redness and swelling of both feet.
I guess what I had the last couple of winters was also gout - even though it didn't really appear that way.
Drugs were the last thing I wanted but lifestyle wasn't doing it so that's that I guess. It's probably still going to take a while to get the dosages and medications correct but at least this rules out everything else and has me mobile enough to get back to living.
Despite the weather last weekend I hit the road for a ride intending to just visit friends but nobody was home so ended up doing the 65km round-trip city to coast triangle. It was cold and windy and I took it pretty easy (i'd only just taken the drugs a couple of days before) so it took me over 3 hours and fortunately I missed the rain. Despite freezing my knees and toes (the rest was rugged up adequately) it was more enjoyable than I expected.
Now, if only winter would end ... it's been bitterly cold this year.
Update: Through the last 3 weeks i had some symptoms return a couple of times. Taking some colchicine cleared it up and it seems to be reducing in frequency and intensity ... but yeah it's still around and that colchicine is not good stuff. I'm not really sure the allopurinol is helping or hurting just yet, or if diet is still an issue or not, or really resolved anything; something for the next dr visit. But apart from one day a week ago i've been mobile enough to live normally; although it's been cold, wet, and pretty bloody dull for the most part so it hasn't made much difference. At least the wet has cut the edge from the bitter cold so it feels like winter is finally on its receding edge.

Update 2: I went back to a doc and he took me off the allopurinol. That seems to have been keeping the gout going. So after a week or so it's cleared up and i've not had an attack since. It's still a bit sore and not fully vanished but it's the best it's been for months, and now i'm doing enough that i just get sore from doing too much. I'm pretty much eating normally but i haven't tried grog yet.
So I don't really have much to say here but this is mostly just to add a "see, that's what happens" with regards to an apparent on-going problem with sourceforge.
I noticed a maintenance message a couple of times in the last few days and just put it down to being on the wrong side of the world as per usual; but it seems they've had some disk failures and restoring a site of that magnitude to full functionality isn't a trivial task.
Of course, the catch-cry is to use github, but that is also at the whim of hardware faults or just economics (as in the case of google code's demise), and savannah isn't immune to either. This also holds for blogger and wordpress and all these other centralised services, whether they be 'free-but-you-are-the-product' ones or paid services.
Not that I think the software i've been playing with has any intention to be the solution to this problem but decentralisation is an obvious answer to managing this risk. It may mean individual sites and projects are more susceptible to overload, failure, or even vanishing from history; but by being isolated it better preserves the totality of the culture represented in these sites and projects. Economically it may also be more expensive in total but as the cost is spread wider that concern just doesn't apply (parallelism and concurrency are wonderful like that).
I was already focusing on my current software project being 'anti-enterprise' - not in an economic or political sense but in an engineering sense - but events like this encourage me.
Intended to do nothing with the weekend ... but then i had "nothing better to do" so did a bit more hacking on the server. I had intended to go look for an updated distro for the xm, but promptly forgot all about it.
I did a bit of work on a `cvs-like' tool; to validate the revision system and to manage the data until I work out something better. The small amount I did on this exposed some bugs in some of the queries and let me experiment with some functions like history logging. The repository format is such that data updates are decoupled from metadata updates so for a history log they have to be interleaved together.

I also came up with a solution for delete and other system flags: I already had an indexed 'keyword' set for each artifact so I just decided on using that together with single-character prefixes to classify them. Apart from these flags I will also use it for things like keywords, categories, cross-reference keys, and whatever else makes sense. System flags probably don't need indexing but it's probably not worth separating them out either. But the short of it is I can now mark a revision as deleted and it doesn't show up on a `checkout' of the branch containing that mark.
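As a rough sketch of that prefix convention (class names and the prefix character here are hypothetical, not taken from the actual code):

import java.util.Set;

class RevisionFlags {
    static final char SYSTEM_PREFIX = '!';                      // assumed prefix for system flags
    static final String DELETED = SYSTEM_PREFIX + "deleted";

    // system flags, keywords, categories and cross-reference keys all share the one indexed set
    static boolean isSystemFlag(String keyword) {
        return !keyword.isEmpty() && keyword.charAt(0) == SYSTEM_PREFIX;
    }

    static boolean isDeleted(Set<String> keywords) {
        return keywords.contains(DELETED);
    }
}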
I did a bit of investigation into berkeley db je to see about some features I was interested in and along the way upgraded to the latest version (damn that thing doesn't sit still for long). This is also AGPL3 now which is nice - although it means I have to prepare a dist before I can switch anything on. Probably for now i'll stick with what I have; I was looking into having a writer process separate from the readers but I didn't get to reading very much about the HA setup before moving onto something else. It's just getting a bit ahead of where i'm at.
The driver of this is more thinking about security than scalability. It's not really a priority as such; but it's too easy to do stupid things with security and i'm trying to avoid any big mistakes.
So I had a look at what plain http can do and to that end implemented a chunk of RFC2617 digest authentication. This guy's code helped me get started so I could just skim-read the RFC to start with, but eventually I had to dig a bit further into the details and came up with a more reusable and complete implementation. The main differences are requiring no external libraries by using javase stuff, and that the nonces are created randomly per-authentication and have a configurable timeout. It all works properly from a browser although nobody seems to use any http auth anymore; I presume it's all just done with cookies and if we're lucky some javascript now (and perhaps, or not, with ssl).
After I did all this I noticed the Authenticator class that can be plugged into the HttpContext, and with not much work I embedded it all into a DigestAuthenticator. Then I made sure it will work free-threaded.
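For reference, this is where that hook sits in the JDK's built-in http server; the stock BasicAuthenticator stands in here purely to show the plumbing, and a digest implementation plugs into the same place:

import com.sun.net.httpserver.BasicAuthenticator;
import com.sun.net.httpserver.HttpServer;

import java.net.InetSocketAddress;

class AuthDemo {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        server.createContext("/private", exchange -> {
            byte[] body = "hello\n".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        }).setAuthenticator(new BasicAuthenticator("demo") {
            @Override
            public boolean checkCredentials(String user, String pass) {
                return "zed".equals(user) && "secret".equals(pass);   // made-up credentials
            }
        });

        server.start();
    }
}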
One problem with digest auth is that a hash of the password needs to be stored in plaintext. Although this means the password itself isn't exposed (since people often reuse them), this hashed value is itself used as the shared secret in the algorithm. And that means if this plaintext hash is accessed then this particular site is exposed (then again, if they can read it then it's already been completely exposed). It's something I can put in another process quite easily though.
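The calculation at the core of it is small anyway. This is a minimal sketch of the RFC2617 response for the qop=auth case using nothing outside javase - an illustration rather than the implementation described above - and ha1 is the hash that has to be stored:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

class DigestCalc {
    static String md5hex(String s) throws Exception {
        byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.ISO_8859_1));
        StringBuilder sb = new StringBuilder();

        for (byte b : d)
            sb.append(String.format("%02x", b));

        return sb.toString();
    }

    // RFC2617 request-digest for qop="auth"
    static String response(String user, String realm, String password,
                           String method, String uri,
                           String nonce, String nc, String cnonce) throws Exception {
        String ha1 = md5hex(user + ":" + realm + ":" + password);    // this is the value that gets stored
        String ha2 = md5hex(method + ":" + uri);

        return md5hex(ha1 + ":" + nonce + ":" + nc + ":" + cnonce + ":auth:" + ha2);
    }
}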
I'm not sure if i'll even use it but at least I satisfied my curiosity and it's there if i want it.
Oh, along the way I (re)wrote some MIME header parsing stuff which I used here and will help with some stuff later. It's no camel but I don't need it to be.
On Sunday I thought i'd found a way to represent the revision database in a way that would simplify the queries ... but I was mistaken, and since I still had nothing much better to do I ended up filling out some of the info implementation and html translator and found a way to approximately align the baselines of in-line maths expressions.
I wasn't going to bother with supporting the @math{} command in my texinfo backend, but a quick search found jlatexmath so I had a bit of a poke and decided to drop it in.
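Using it is pretty simple; this is a minimal sketch of rendering an expression to an image with the stock jlatexmath API - the wiring is illustrative rather than the actual translator code:

import org.scilab.forge.jlatexmath.TeXConstants;
import org.scilab.forge.jlatexmath.TeXFormula;
import org.scilab.forge.jlatexmath.TeXIcon;

import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

class MathRender {
    static BufferedImage render(String latex, float pointSize) {
        TeXFormula formula = new TeXFormula(latex);
        TeXIcon icon = formula.createTeXIcon(TeXConstants.STYLE_DISPLAY, pointSize);
        icon.setForeground(Color.BLACK);

        BufferedImage img = new BufferedImage(icon.getIconWidth(), icon.getIconHeight(),
            BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = img.createGraphics();
        icon.paintIcon(null, g, 0, 0);
        g.dispose();

        return img;
    }
}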
Here's some examples as rendered in my browser:
The first of each row is rendered using jlatexmath via Java2D, and the second is rendered by the browser (svg). They both have anti-aliasing via alpha and both render properly if the background colour is changed. Most of the blur differences seem to be down to a different idea of the origin for pixel centres, although the browser is probably hinting too (ugh). The PNGs are also only 4-bit; but they hold up pretty well and actually both formats look pretty decent. Alas the poor baseline alignment, but this is only a quick hack and not a complete typesetting system.
The SVG should at least scale a bit; unfortunately it tends to get out of alignment if you scale too much. Maybe hinting is fucking it up?
When I did it I first hacked up a really small bit of code which directly outputs SVG. It implements just enough of a skeleton of a Graphics2D to support the TeXIcon.paintIcon() function. Only a small amount is needed to track the transform and write a string or a rectangle.
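In outline it's something like this (class and method names invented for illustration): keep the current transform, and turn each string or filled rectangle into an element.

import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

class SvgOut {
    private final StringBuilder sb = new StringBuilder();
    private AffineTransform transform = new AffineTransform();

    void setTransform(AffineTransform t) {
        transform = new AffineTransform(t);
    }

    // a glyph run becomes a <text> element at the transformed position
    void text(String s, double x, double y, String family, double size) {
        Point2D p = transform.transform(new Point2D.Double(x, y), null);
        sb.append(String.format("<text x=\"%.2f\" y=\"%.2f\" font-family=\"%s\" font-size=\"%.2f\">%s</text>%n",
            p.getX(), p.getY(), family, size * transform.getScaleX(), s));
    }

    // fraction bars and the like arrive as filled rectangles
    void rect(double x, double y, double w, double h) {
        Point2D p = transform.transform(new Point2D.Double(x, y), null);
        sb.append(String.format("<rect x=\"%.2f\" y=\"%.2f\" width=\"%.2f\" height=\"%.2f\"/>%n",
            p.getX(), p.getY(), w * transform.getScaleX(), h * transform.getScaleY()));
    }

    String body() {
        return sb.toString();
    }
}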
As an example, @math{E=mc^2} gets translated into this:
<svg xmlns="http://www.w3.org/2000/svg" width="65" height="18" version="1.1">
 <text x="2.00" y="15.82" font-family="jlm_cmmi10" font-size="16.00">E</text>
 <text x="18.26" y="15.82" font-family="jlm_cmr10" font-size="16.00">=</text>
 <text x="35.14" y="15.82" font-family="jlm_cmmi10" font-size="16.00">m</text>
 <text x="49.19" y="15.82" font-family="jlm_cmmi10" font-size="16.00">c</text>
 <text x="56.12" y="9.22" font-family="jlm_cmr10" font-size="11.20">2</text>
</svg>
There are some odd limitations with svg used this way: having no alt tag and no way to copy the image pixels is a pretty big pair of problems. So I also looked into inline PNG, and since I was going to that much effort, into seeing how small I could make it by using a 4-bit image.
After a bit of poking around I worked out how to generate a 4-bit PNG with the correct alpha directly out of javase. I render to a normal 8-bit image and then copy the pixels over to a special 4-bit indexed image using get/setRGB(), and the ImageIO PNG writer writes it correctly. Rendering directly to the image doesn't work (wrong colour selection or something to do with the alpha channel), nor does drawing the 8-bit image into it via image.createGraphics().drawImage(8bitimage, ...), although a manual data elements write should and will be the eventual solution.
It makes for a compact image and in base64 the image is about the same size as the svg.
<img alt="e=mc^2" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEE AAAASBAMAAAD2w64vAAAAMFBMVEUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAABaPxwLAAAAD3RSTlMAESIzRFVmd4iZqrvM3e5GKvWZAAAA+0lEQVR42mNgoAg wBhFSIcqRwMDAhE+F888GAip2MT4AWsXAwLbi1UfVpgvYlHCqQ4TzgEzshgQzQGyxYWD4i1UBy1k HmAqHf1hVWO0F+oUFqIiXweEAknjlGSGZh7H+TJMvzFaG2MLybTbYVTOBYAqQu65mZeOaLwxa1Wo wPVwLmByQjGDmCmA5wHCE4RiDEpgPMuMAwwFlhIq/LBeYLzDKMGsz3IM4l4FB8cA/Bt8JDJwTgNz fOQwM+oe5D7BuUHoP8xADg/0kBrYPDAzf06FCOodZDrBulAUG5S+wrQyak7R96zORHFK4zvQWi+E WNi7rC/iiRACMiAMA0lo9OMkFb4YAAAAASUVORK5CYII="></img>
FWIW this is how I create the image that I write to the PNG:
static BufferedImage createImage4(int w, int h) {
    int[] cmap = new int[16];

    for (int i = 0; i < 16; i++)
        cmap[i] = (i + (i << 4)) << 24;

    IndexColorModel cm = new IndexColorModel(4, 16, cmap, 0, true, 0, DataBuffer.TYPE_BYTE);

    return new BufferedImage(w, h, BufferedImage.TYPE_BYTE_BINARY, cm);
}
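And roughly how it might be filled and turned into an inline image, following the get/setRGB() copy described above - a sketch using createImage4() from the snippet above, not the actual translator code:

import javax.imageio.ImageIO;

import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Base64;

static String toDataURI(BufferedImage src) throws IOException {
    // src is a normal 8-bit ARGB render; copy it pixel by pixel into the 4-bit indexed image
    BufferedImage dst = createImage4(src.getWidth(), src.getHeight());

    for (int y = 0; y < src.getHeight(); y++)
        for (int x = 0; x < src.getWidth(); x++)
            dst.setRGB(x, y, src.getRGB(x, y));

    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    ImageIO.write(dst, "png", bos);

    return "data:image/png;base64," + Base64.getEncoder().encodeToString(bos.toByteArray());
}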
One might notice that the colour palette is actually all black and only the alpha changes - if a browser doesn't support the alpha colourmap then the image will be black. Bummer.
Wikipedia uses 4-bit PNGs for its maths equations but I think it only has a transparent colour - and in any event they clearly only work if the background colour of the browser is white. Staring at fully white pages for 10+ hours per day just burns your eyes out so I force my browser to roughly amiga-console-grey, because that's a colour someone actually thought about before using it. I think we can 'thank' microsoft for the brilliant white background in browsers as before IE they weren't so stupid as to choose it as the default. White on black isn't much better either.
But as a result this is the sort of fucked up crap I get out of wikipedia unless I disable my style overrides:
I've started exporting its pages to PDF so I can actually read them (using a customised mupdf which uses a grey background) but its formatting leaves a little to be desired and if anything, by making it appear like a paper, it just emphasises any (of the many) shortcomings in the information content and presentation.
Pretty much any site with maths is pretty shit for that matter; everything from missing or low-quality 2-bit renders to fat javascript libraries that do the layout client-side. Dunno if this approach will be much better but I'm not going to need it very often anyway.
For various reasons from the weather to health to work I've been feeling pretty flat lately and it had me thinking about the past a bit. To think that i've gone from hand-coding raster interrupts and sprite multiplexors to writing information serving software "for fun" is pretty depressing. Computers used to be a much much more fun hobby.
I've been poking at the texinfo parser this week. I was hoping to do a quick-and-dirty parser of a sub-set of it but with a bit more ... but that bit more turns it into something a lot more complex.
The problem is that texinfo isn't a 'file format' as such; it's just a language built on tex. And tex is a sophisticated formatting language that can change input syntax on the fly amongst other possibilities. Unlike xml or sgml(html) there are no universal rules that apply to basic lexical tokens, let alone hierarchical structuring.
After many abortive attempts I think i've finally come up with a workable solution.
The objects in the parser state stack are the parsers themselves, so it varies the parsing and lexical analysis based on the current environment. Pseudo-environments are used for argument processing and so on. The lexical analyser provides multiple interfaces which allows each environment to switch analysis on the fly.
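In outline it's something like this; a hypothetical sketch, with none of the names taken from the actual code:

import java.util.ArrayDeque;
import java.util.Deque;

interface Lexer {
    String nextToken();     // word/command-at-a-time view
    String restOfLine();    // raw view, for environments with different lexical rules
}

interface Environment {
    // consume some input using whichever lexer view suits this environment,
    // build its part of the tree, and push or pop environments as the input dictates
    void step(Lexer lex, Deque<Environment> stack);
}

final class Parser {
    void run(Lexer lex, Environment document) {
        Deque<Environment> stack = new ArrayDeque<>();
        stack.push(document);

        while (!stack.isEmpty())
            stack.peek().step(lex, stack);   // whatever is on top decides how the next piece is parsed
    }
}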
Error handling or recovery? Yeah no idea yet.
Streaming would be nice but I will leave that for another day and so far it dumps the result to a DOM-like structure. I could implement the W3 DOM interfaces but that's just too much work and not much use unless I wanted to process it as XML directly (which i don't).
I still need to fill out the solution a bit more but it's nice to have the foundation of the design sorted out. It's been a long time since I worked on trying to write a `decent' solution to a parser as normally a hack will suffice and i was pretty rusty on it.