After a few more hiccups I finally gave up on CentOS on my workstation last week. I don't mean to diss the CentOS project and people as such, but I have to say I had a pretty negative experience with it and probably won't be considering it again (there just isn't any need). I hadn't tried it before, and thought I'd give it a go, mostly because I was expecting simple reliability.
Unfortunately not.
Now I don't know how much CentOS differs from RHEL, but this doesn't leave a good impression of that either. The whole point of using old software with limited packages is stability and reliability, and it was on those two particular points that I had the most negative experience. Probably the worst I've had with any OS since automated package management existed (~10 years).
I've moved to Fedora 11 and have had no issues since. Faster, more stable, more packages, blah blah blah.
Well, while I'm dissing free software projects ... I don't know what they're doing wrong, but OpenJDK has some major performance issues that simply make Java (and anything that uses it) look really bad (in all fairness I am using Fedora 11, so it is certainly quite an old release ...). I am doing some stuff in NetBeans (6.8) and although it was usable for the most part, it had a decidedly lurcherous gait which only got worse over time, till it was barely usable. I resorted to using emacs for editing (it's still a much better basic editor), jumping back to NetBeans only to look up methods (JavaDocs, as useful as they are, are a bit unwieldy to navigate, particularly in a web browser). I suspect it has more to do with something basic like the interaction with X than anything else - and we all know what a mess f.d.o are making of the X Window System, so it probably isn't entirely `their fault'.
So I finally got sick of that, and after a bit of searching on the net and trying "-Dsun.java2d.pmoffscreen=false" to no big effect, the only thing left seemed to be to try the Sun JDK. *Cough* Oracle JDK. Apart from a different way of selecting fonts that initially made it look a bit crap, and a generally lighter appearance in the text, you'd think I'd just upgraded the computer.
It's like night and day. It might use scads more memory than a C app, but it feels just as snappy and responsive (at least in comparison to how it was - hard to gauge). The thing is, having such a poor user experience from OpenJDK just continues to give Java a bad name when it just isn't warranted, and hasn't been for a long time now. I was originally just going to implement a GUI in Java (or maybe just a prototype) but I might try a few tests of the algorithms to see how it compares to the C code - it spends most of its time in FFTW anyway at present. It's a real pity that it doesn't support a primitive complex type, but that isn't the end of the world. I'm using the Shared Scientific Toolkit at the moment, although mostly just for the arrays and FFT ops (that's all the code really needs).
Oh god, what is this pain in my head ... yet another messed-up build system. One that provides all the craptascialness of Ant for your C++ projects too! Jesus, who comes up with this shit? I had some issues building SST - it turned out I just needed to install the static FFTW library, but along the way I had my first experience of CMake. About the only thing going for it is that it doesn't use XML. What I don't understand is why these things come along and break the whole point of make files. During the build something failed - so I worked out how to manually run the command, and the command ran ok. Fine then, just run make and let it continue ... oh no, that would be too fucking simple wouldn't it? It uses some fucked-up meta-system for tracking build dependencies, so it just re-builds the whole directory again, again failing on the final link. Then I ran a different make target which built ok ... but wasn't quite what I needed. No problem, try the original build target ... 'target up to date' ... huh?
THIS IS TOTAL SNOT!
The one and only point for any make system to exist in the first place is to guarantee your builds are consistent without having to recompile everything every time. For everything else you may as well just use shell scripts - which are conveniently embedded inside a Makefile. I understand CMake does some other stuff, and also provides a `shell script' environment that supports broken operating systems, and that idea has some merit - should you wish to interact with such a broken system at least - but if it can't get the basics right what use is it?
Ant is another nightmare all of its own - not only does it not do any dependency checking whatsoever (one of the most critical features of any build system), simple human readable shell scripting is replaced not only by a disastrous and unparsable XML scripting language but also by dynamically loaded Java modules! I can understand wanting to use your favourite language for everything, but Java is not a scripting language and sometimes there is a right tool for the job.
completely broken ...
... simply worthless.
CMake ... Ant ... I consider any tool where you must occasionally `make clean' to get a reliable build to be completely broken and simply worthless. CMake seems to get around this absurdity by abusing 'ccache' - another painful bit of kit that shouldn't need to exist (the limited use-cases for which ccache provides any service can mostly be handled in other, much simpler ways - assuming you have a reliable dependency mechanism in the first place). And Ant gets around it because the Java compiler is quite fast anyway, and already does the dependency checking - but only for the Java sources, and any real project has to do more than just compile objects and link them together. One often has to 'clean and rebuild' in all the GUI IDEs I've used, but there should be no reason to ever need this unless you're manipulating files outside of it.
What I find disappointing (vexatious, alarming, and upsetting too) is that people refuse to learn a basic, reliable, flexible tool, and then come up with their own which doesn't even solve the original problem. `I can't use an editor to control the tabs in my file' is really an idiotic and puerile argument (if you do think that, then yes, I am in fact calling you an idiot. Idiot.) I suspect if they understood the original problem they wouldn't have bothered to inflict all this crap upon themselves or the rest of us in the first place - and wouldn't the world be a better place for that?
I understand some people think make is a bit arcane ... but it isn't. XML is the definition of arcane - let's design a format which makes it easy to write parsers, for developers. You know, that tiny bit of code that gets written once and forgotten about because nobody fucking cares once it works. It's not like they even got that bit right - the parsers are large and complex, and the code you write using them ends up being large and complex too (and it's all slow). But the real winner is that this then inflicts all the nitty-gritty which makes it 'easy to parse' onto the user, for whom it is decidedly not easy to parse, or write, or even design. Now that's arcane.
But I digress.
The auto*tools are a big bit of nasty, but at least it's only one (perhaps broken) system to learn, not 'n' (definitely broken) systems where 'n' is monotonically increasing. And there are plenty of application projects for which it shouldn't be needed (but gets used anyway, sigh).
Shit, I was supposed to have some bricks delivered this morning - I was up way too late and got little sleep worrying about making sure I tell them to put them in the driveway, to avoid 3 hours of back-breaking work moving them. I suppose I should go get some groceries, or I might just avoid the shops, which turn into a nightmare of panic buying just before Easter since many shops are shut - GOOD HEAVENS - for 2 whole days in a row. How ever will we all survive for so long without being able to buy more shit!
Well, I guess that's really the last nail in the coffin for CELL.
Sony's just announced that the next firmware `upgrade' for the PS3 will drop Linux support (and it's so important, that's all it will do). This is very, very disappointing. They blame it on crackers or 'security', but it's obvious it is just a cost-cutting exercise. Sony have been hurting financially for a while now, and the razor gang is out with their daggers looking for savings.
After the `PS3 slim' dropping support (due supposedly to lack of resources to write the hypervisor drivers), and then IBM dropping CELL for HPC ... I guess the writing was on the wall. I'm glad I gave up development on CELL BE some time ago and got hold of a beagleboard instead - overall it's been a more satisfying experience, if only because things are simpler. The whole CELL thing was just a costly mistake for all by the looks of it - being a bit ahead of its time led to a few limitations that people couldn't cope with.
Even though I rarely use it anymore, the whole thing plainly stinks - this is not a device I rent, I bought it. And for them to come into my home and remove functionality (advertised on the box no less) from a device that I paid for in full (well over-paid) should simply be illegal, if it isn't already.
I can't even log onto the Sony blog to fruitlessly whine about it because they've changed the login system to some horrid mess that takes ages to load and only shows blank pages (I bet it works on ie6 though, if the comments in the page are anything to go by). Well if they don't want me as a customer it isn't really my loss is it?
Time to remove CELL BE from the subtitle of this page at least; not that it has had much point in being there for quite some time.
Hmm, that was frustrating.
I've been trying to write a `kernel' boot header - one that sets the MMU up for the kernel to execute at another address (0xC0000000) and then jumps to it. I've been very tired from sleeping poorly and a bit brain-dead after work, so I haven't been really switched on, but it's been dragging on so much I was about to give up (well, not really, but it felt like I should).
Apart from a few little bugs, I was using the wrong TEXCB/AP flags for the level 2 page entry for devices ... but I don't know why it's wrong. It seems to check out in the manual, but for whatever reason it just crashes the code (FWIW I was using 0xb2 - 'non-shareable device, rw everyone' - rather than 0x16 - 'shareable device, rw supervisor only'). Blah. One little number change and now it works. $@%!$#
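For reference, here's my reading of those two values against the ARMv7 short-descriptor small-page format - just a sketch, the macro names are made up and not from any real header:

/* ARMv7 short-descriptor l2 small-page entry fields (my reading of the
   manual - treat with suspicion). Macro names are invented for this sketch. */
#define PTE_SMALL   (1 << 1)    /* bit 1 set: small (4K) page */
#define PTE_B       (1 << 2)    /* bufferable */
#define PTE_C       (1 << 3)    /* cacheable */
#define PTE_AP(n)   ((n) << 4)  /* AP[1:0] access permissions */
#define PTE_TEX(n)  ((n) << 6)  /* TEX[2:0] memory type */

/* 0xb2: TEX=010 C=0 B=0 -> non-shareable device, AP=11 -> rw everyone */
#define PTE_DEV_RW_ALL  (PTE_SMALL | PTE_TEX(2) | PTE_AP(3))
/* 0x16: TEX=000 C=0 B=1 -> shareable device, AP=01 -> rw supervisor only */
#define PTE_DEV_RW_SUP  (PTE_SMALL | PTE_B | PTE_AP(1))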
I plan to use the two-translation-table mode, which means the system memory will start at 0x80000000 - so it may make sense to just identity map the kernel at that address. But for now the memory map will have the kernel at 0xc0000000, and I'll start shared libraries or something else at 0x80000000.
So here it is ... in hindsight I may have done things in the wrong order, but this way makes things easy. I set aside some memory in the BSS section for the page tables and let the linker manage allocating space for them, and also for the I/O devices - although this means a couple of physical pages are lost at present.
There are a few little `tricks' I use so the code is position independent, although there are possibly better ways to do it. The init code has to be position independent because the linker script is set up so that all the code starts from the same virtual address - it could be done otherwise, but then I would need an ELF loader to relocate the image, which is somewhat more work.
_start:
        adr     r12,_start      @ this will be physical load address
        mov     sp,r12
        push    { r0 - r3 }
First I just set up r12 and the stack to point to our load address - which is 0x80008000, as set by the linker script. This gives the code a fixed location from which to calculate physical and virtual addresses. The incoming arguments are saved too - although nothing uses them yet (das u-boot can pass in arguments or information about modules or filesystems it preloaded into memory).
        ldr     r1,bss_offset
        ldr     r2,bss_offset+4
        add     r1,r1,r12
        add     r2,r2,r12
        mov     r0,#0
1:      str     r0,[r1],#4
        cmp     r1,r2
        blo     1b
Clear the BSS - the code reads a relative offset that the linker creates, which indicates where the BSS starts and stops, and then uses r12 to map that to the physical address. The ldr r1,bss_offset is assembled into a pc-relative instruction, so it will work no matter where it's loaded.

Then there is a loop which uses a table to initialise the page tables. I first need to find the space within the BSS where it is stored, and then iterate through the entries. Each range is defined by a virtual target address, a start offset relative to _start, a virtual end address, and the `small page' flags for the pages.
        ldr     r11,ttb_offset
        add     r11,r12                 @ physical address of kernel_ttb
        add     r10,r11,#16384          @ same for kernel_pages
        adr     r9,ttb_map
        mov     r8,#ttb_size
1:      ldm     r9!, { r4, r5, r6, r7 } @ virtual dest, start offset, virtual end, flags
        add     r5,r12                  @ physical address
2:      mov     r3,r4,lsr #20
        ldr     r2,[r11, r3, lsl #2]
        cmp     r2,#0
If the l2 page isn't set yet, then just allocate one and update the l1 entry.
        moveq   r2,r10
        addeq   r10,#1024
        orreq   r2,#1
        streq   r2,[r11, r3, lsl #2]
Form and store the l2 page table entry.
        bic     r2,#0xff                @ r2 = physical address of l2 page
        mov     r1,r4,lsr #12
        and     r1,#0xff
        orr     r0,r5,r7
        str     r0,[r2, r1, lsl #2]
And then loop for all the pages and all the entries in the table. Here I compare the end address for equality - I do this so I could map the last page of memory if I wanted to, but currently I don't use this.
        add     r4,#4096
        add     r5,#4096
        cmp     r4,r6
        bne     2b
        subs    r8,#1
        bne     1b
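As an aside, here is roughly the same logic sketched in C - purely illustrative, the struct and the names are invented for the sketch rather than lifted from the real code:

#include <stdint.h>

/* Illustrative C equivalent of the loop above: walk each map entry and
   fill in the l1 table and the l2 small-page tables as needed. */
struct map {
        uint32_t vdest;         /* virtual destination address */
        uint32_t soffset;       /* start offset relative to _start */
        uint32_t vend;          /* virtual end address */
        uint32_t flags;         /* small-page flags */
};

void fill_tables(uint32_t *ttb, uint8_t *pages, uint32_t load,
                 const struct map *map, int count) {
        uint8_t *next = pages;

        for (int i = 0; i < count; i++) {
                uint32_t vaddr = map[i].vdest;
                uint32_t paddr = map[i].soffset + load;

                /* equality compare, as in the assembly, so the last page
                   of the address space could be mapped too */
                while (vaddr != map[i].vend) {
                        uint32_t *l2;

                        /* allocate an l2 table the first time this 1MB
                           l1 slot is touched */
                        if (ttb[vaddr >> 20] == 0) {
                                ttb[vaddr >> 20] = (uint32_t)(uintptr_t)next | 1;
                                next += 1024;
                        }
                        l2 = (uint32_t *)(uintptr_t)(ttb[vaddr >> 20] & ~0xffu);
                        /* form and store the small-page entry */
                        l2[(vaddr >> 12) & 0xff] = paddr | map[i].flags;

                        vaddr += 4096;
                        paddr += 4096;
                }
        }
}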
That's really the meat of it - the table has the smarts in it, and uses the linker to create the interesting values required.

Then it just turns on the MMU - this could probably be simplified, as I can just enforce the state I want (i.e. not bother preserving bits). Putting 1 in CP15_TTBCR means that two page tables are used; the TTBR1 table is used for any address with the top bit set (i.e. >= 0x80000000).
        mrc     15, 0, r0, CP15_SCTLR
        bic     r0,#SCTLR_ICACHE
        bic     r0,#SCTLR_AFE | SCTLR_TRE | SCTLR_DCACHE | SCTLR_MMUEN
        mcr     p15, 0, r0, CP15_SCTLR
        mov     r0,#0
        mov     r1,#1
        mcr     p15, 0, r0, CP15_TLBIALL
        mcr     p15, 0, r1, CP15_TTBCR          @ Top 2G uses TTBR1
        mcr     p15, 0, r11, CP15_TTBR0
        mcr     p15, 0, r11, CP15_TTBR1
        mcr     p15, 0, r0, CP15_TLBIALL
        sub     r0,#1
        mcr     p15, 0, r0, CP15_DACR
        pop     { r0 - r3 }
        mrc     15, 0, r8, CP15_SCTLR
        orr     r8,#SCTLR_MMUEN
        mcr     p15, 0, r8, CP15_SCTLR
This last instruction turns the MMU on (and will probably eventually turn on the caches/etc). The input arguments are restored before turning on the MMU since the stack memory will no longer be valid or mapped (actually I should probably map the same 32K to the system stack wherever I decide to put that). The CPU now flushes the pipeline and starts executing instructions from the current pc - but with the MMU on. Because of this, the code has to ensure this instruction is still mapped to the same address, otherwise it's a one-way trip to la-la land.

In this case the ldr pc,=vstart will force the assembler to generate a constant load from the constant pool (via a pc-relative load). The linker will set this constant up to point to the virtual address properly.
        ldr     pc, =vstart
Now come the relative offsets used to locate the BSS range, as well as the page table memory from within BSS.
bss_offset:
        .word   __bss_start__ - _start
        .word   __bss_end__ - _start
ttb_offset:
        .word   kernel_ttb - _start
And then the important stuff - the page table mapping descriptions. Rather than storing the 'virtual end' address it could probably store the length of the address range, but so long as they are aligned properly it doesn't really make much difference. Note that even with the relative addresses, any range in memory can be accessed using the simple arithmetic that the linker supports.
ttb_map:
        @ this page, so mmu can be enabled
        .word   LOADADDR, 0, LOADADDR + start_sizeof, CODE
        @ kernel text at virt address
        .word   __executable_start, 0, __data_start__, CODE
        @ kernel data
        .word   __data_start__, __data_start__-_start, __bss_end__, DATA
        @ system stack, 32K, 4K from end of memory
        .word   0 - 32768 - 4096, 0x8000000 - LOADADDR, 0-4096, DATA
        @ i/o of gpio, for debug too (LEDs!)
        .word   GPIO5, 0x49056000 - LOADADDR, GPIO5+4096, NDEV
        @ do serial port too, for debug stuff
        .word   UART3, 0x49020000 - LOADADDR, UART3+4096, NDEV
        .set    ttb_size, (. - ttb_map) / 16
        .ltorg
The .ltorg ensures the constant pool is stored at this point, so we can guarantee they are within the one page which needs to be identity mapped immediately after turning on the MMU.
vstart:
        ldr     sp,=-4096               @ init stack
@       bl      __libc_init_array       @ static initialisers
        mov     r8,#(0xf<<20)           @ enable NEON coprocessor access (still off though)
        mcr     p15, 0, r8, c1, c0, 2
        b       main
And this is the 'virtual address' entry point. This could just occur immediately after the setup code, but separating it makes the split obvious. About the only necessary setup is the (system) stack pointer. I was going to place this at the end of the virtual memory, but having it one page back protects against stack underflow as well.
And finally there is the size of this code, and the BSS, which stores the bare minimum so I can set it up and see that it works (i.e. the UART, or blinking the LEDs).
        .set    start_sizeof, ((. - _start)+4095) & 0xfffff000

        .bss
        .balign 16384
        .global kernel_ttb, kernel_pages, UART3
kernel_ttb:     .skip   16384
kernel_pages:   .skip   1024*32
GPIO5:          .skip   4096
UART3:          .skip   4096
And ... it's done. Phew.
Unfortunately this means all my 'library code' that uses fixed physical addresses won't work any more, including the debug printing stuff. But that's something to worry about later.
One goal I had was that the code isn't just setting up a page table to be thrown away later - this is sufficient to remain the kernel page table forever. Either for supervisor-level kernel processes/threads, or, as in this case, as the `system page table' which is used for any address above 0x80000000. It still needs a little tweaking - the page table should be write-through cacheable, for instance - but now that it works I can worry about the details. Well, now hopefully I can move on to more interesting things.
For work I have been playing with a few things of some interest. I thought I needed a function that could interpolate a set of values spread across an arbitrary 2D plane into a grid of values. I came across an interesting implementation of Thin Plate Splines which seemed to do the job. Unfortunately it turned out that I needed to interpolate more values than is practical with this algorithm (it does it, it just takes too long), and I can probably just force the values to be in a grid anyway so I can use much simpler methods. But still, it's an interesting algorithm to have in the toolkit and it produces pleasant-looking results. Interestingly, I found the C++ LU decomposition code too messy to convert to C (I'm using different data structures) and just used the Java one it references as a starting point instead. It was much more C-like and translated in a very straightforward manner.
So I wrote a basic bicubic interpolator - the code uses bilinear at the moment, although in an inconsistent way which doesn't really work since values can be missing. I was hoping bicubic would be a more natural fit for what it is doing, and I'd worry about the missing values later. Unfortunately it doesn't seem to help much - the input data is just too noisy/inconsistent, so I guess there is more to fix first (sorry, this doesn't make much sense, I can't really say what it's trying to do).
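For reference, the guts of a bicubic interpolator are only a few lines of C. This is a generic Catmull-Rom formulation - a sketch, not the actual work code (which I can't show anyway):

/* Catmull-Rom cubic through p1..p2 given 4 samples, t in [0,1]. */
static float cubic(float p0, float p1, float p2, float p3, float t) {
        return p1 + 0.5f * t * (p2 - p0
                + t * (2*p0 - 5*p1 + 4*p2 - p3
                + t * (3*(p1 - p2) + p3 - p0)));
}

/* Bicubic: interpolate each of the 4 rows in x, then the results in y. */
static float bicubic(const float p[4][4], float tx, float ty) {
        float c0 = cubic(p[0][0], p[0][1], p[0][2], p[0][3], tx);
        float c1 = cubic(p[1][0], p[1][1], p[1][2], p[1][3], tx);
        float c2 = cubic(p[2][0], p[2][1], p[2][2], p[2][3], tx);
        float c3 = cubic(p[3][0], p[3][1], p[3][2], p[3][3], tx);
        return cubic(c0, c1, c2, c3, ty);
}

The hard part isn't this - it's filling in the 4x4 neighbourhood when samples are missing, which is exactly where my code falls over.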
I have some photos of the progress on the retaining wall, but I'm too lazy to put them up today. I got some ag-pipe on the weekend, so I'm just about ready to back-fill at least some of the wall (I don't think I have enough gravel to do the whole lot, but I'll see), although I'm not sure where to run it - and an outlet mid-way along the wall I've already laid will be a bastard! I was going to have it coming out of the ends, but now I'm not so sure. I need to decide so I can get the right fittings too (which for some reason are rather expensive for what they are).
Boral are having a sale on bricks and whatnot this week, so I went and ordered another pile of retaining wall blocks (40% off makes it worth it, even if I don't need them for a while). I wasn't really sure how many I needed to start with, and I used a lot more than I thought originally (just the main wall uses most of them). I have a better plan of what I want to end up with now, so hopefully I got it right ... I guess I can always put them around trees or something if I have too many, or create a lower wall if I don't have enough.
Since I won't need to use them for a while I'm going to try to get them delivered into the driveway - so I don't have to move them off the verge by hand. So today I also moved the rest of the roadbase off the driveway to a pile out the back. Unfortunately I overloaded my cheap wheelbarrow and it turned over and I bent the handle (well, it was only $60), but it's still usable. If I get stuck into finishing off the walls around the paving area it will get used up pretty fast anyway - of the 3 tons I probably have under 1 left. I'll get the bricks before Easter, so it could be a very long long weekend if I get stuck into it ...
The Haiku mailing list is the strangest list I ever did subscribe to.
This post from Justin is completely super-fantastalistic. Very good read, if completely off-topic.
Still, this thread was basically the last straw for me at the moment on the Haiku list (but not enough on its own) ... I guess I may return one day though.
Update 1/4/2010. I've edited this post to change the tone a bit - I shouldn't have posted intoxicated as usual, or at the insane hour of the morning I did either, IIRC.
Read and enjoy.
from Justin x
Islam is the same way in it's treatment of Kuffar etc... death to apostates (as Sharia law demands that in a Muslim country under Sharia law if you say that you are no longer a Muslim the punishment is to be killed. How enlightened and civilized!), conversion or death for non-Muslims etc... not to mention that Islam specifically denies the divinity of Christ, who for Christians IS God etc... so that COMPLETELY denies a shared belief in the same God... as they're just two absurd variations on an older myth... just like Mormons in the US today are an even more recent absurd fallacious re-invention of older mythology... exactly as what Judaism was to the pagan polytheistic pre-monotheistic god of Moses... what Christian was to Judaism... what Islam was to Christianity... what Mormonism is to Christianity... just newer and newer absurd twists on an already absurd mythology invented by painfully ignorant barbarians who knew less about the natural world than small children know today.
Not to mention that when it comes down to it any one of the followers of any of these faiths INHERENTLY denies the validity of the others... but is too stupid to actually comprehend that the very reasons they deny all those other religions are the exact same reasons that apply to their own stupid fairytale. They just can't put two and two together because they're brainwashed, compartmentalized, idiots who utterly lack the ability to apply logic, reason, critical assessment etc to their own beliefs even when they do use them to dismiss other beliefs.
And to top that off, again, you have people like this who smile and act like it's some happy thing that you believe in the same stupid fairytale in spite of the fact that in reality you DENY the validity of their belief and their different interpretation on a completely fabricated mythology.
So when I see grown adults behaving like idiots who believe in imaginary gods invented centuries or millennia ago by ignorant bronze age tribesmen etc... and then ignoring what the very holy scriptures they're tittering about actually say etc... and end up just patting each other on the back because their mutually shared delusion stems from the same older absurd mythology... and that they have in common that ignorant and irrational belief in the supernatural etc...
I honestly didn't expect to run across so many idiots on a mailing list for a not well known, in-development, operating system.
And for the record I actually hate Satanists... I made the earlier post just to illustrate the principle of people talking about a pretty stupid topic on a mailing list that other people might find offensive for its stupidity and implications. The reality is that you can't even legally reprint the Satanic Bible because it's still a copyrighted work, having only been written in 1969 etc... but I'd wager that probably nobody in this drooling bunch of idiots actually knows much of anything about the actual Satanic Bible... or the fact that true Satanists who are actual members of the Church of Satan DON'T ACTUALLY BELIEVE IN THE SUPERNATURAL AT ALL. They don't believe in a literal being called Satan and merely refer to him as a symbol of rebellion, self importance, etc... but pretty much not a single religious person I have met knows any of that because they actually do believe in make believe mythological beings and invisible all seeing men etc... and they spend so much time lying to each other about the boogieman that they never bother to actually educate themselves not only about the actual REAL history of their own religions and what their own scriptures actually say and command them to do etc... but certainly not anything that might ever, in any way, pose a threat to their delicate bubble of profound closed minded IDIOCY.
And I say all this as a person who was a Christian and youth minister for over 20 years.
I hate you people for being so damn stupid and for publicly blabbering about it like it's something to be proud of... and for having the false impression that you have some right to be exempt from being ridiculed as the idiots you actually are. That people like me need to sit and watch you publicly make asses of yourselves and seek to get together to try to bring your idiot cult and its literature to a new OS to try to share and spread your profound retardedness and brainwashing to future generations.
Like try this on for size you morons...
We know for a FACT that the Earth was not created 6,000 years ago... much less the whole universe. We now FOR A FACT that humanity did not suddenly appear 6,000 years ago in a magic garden in Mesopotamia. And because we know FOR A FACT that claim is false... we know that Adam and Eve weren't there strolling around and talking to a talking serpent either. Nor were there magic fruit trees that they ate from BECAUSE THEY DIDN'T EXIST. And because we know that, we know that they didn't eat that fruit and commit the original sin because THEY WEREN'T THERE. Thus there was nothing for the made up God to be mad about.. nothing for him to create himself as his son to forgive us from because it never happened because they were never there.
Can your tiny minds follow that train of logic?
http://en.wikipedia.org/wiki/Age_of_the_Earth
http://en.wikipedia.org/wiki/Human_evolution
http://en.wikipedia.org/wiki/Timeline_of_human_evolution
http://en.wikipedia.org/wiki/Human_mitochondrial_DNA_haplogroup

By the very same logic and facts, we know that Noah's Ark NEVER HAPPENED. Not only is it laughably absurd from a physical and logistical standpoint... but that much water has never in the history of this planet existed here on Earth. And had it actually rained that much (not to mention the laughably stupid claim that the bible actually makes about the heavens, the firmament, actually being a mechanical dome over head to hold the literal ocean above us up from falling on us... in which god opened up mechanical floodgates to let all the water pour down to flood the Earth and then made it all evaporate... but I digress...) had it actually rained that much and then evaporated in that span of time it would have saturated the entire atmosphere up into the stratosphere to over 100% and blocked out the sun completely and frozen the entire Earth wiping out life as we know it... and this all supposedly happened in the last 6,000 years... you know when the Egyptians had a kingdom etc... and these other civilizations failed to notice a world wide flood and subsequent world encompassing life obliterating ice age etc... IT'S LAUGHABLE at how absolutely detached from reality you people are that believe this tripe... and these weren't figurative stories either... as we know from a wealth of contemporary archaeological evidence that the people of those times LITERALLY BELIEVED these things because they were IGNORANT and knew essentially nothing about the natural world so they made up answers as best they could to describe the world around them and took it as fact... such as the fact that up until a few centuries before Christ they all still believed that the Earth was a circular flat disk. They LITERALLY believed it because the earth looked flat to them. All the older Old Testament books in the bible are written from that perspective and contain references to the Earth being flat etc.
But I digress...
When people are faced with reality and facts like this which threaten their stupidity bubble, they jump to make Ad Hominem personal attacks... to argue style over substance... to ignore the burden of proof for their ridiculous claims... to use special pleading to try to claim exemptions for their own idiotic beliefs that they don't allow to be applied to anything else... they erroneously and without any evidence presuppose the validity of their wrongheaded mythology and then use that unsupported presupposition to then fallaciously dismiss facts and actual solid real world evidence that disproves their claims.
They hem and haw, they try to claim Pascal's Wager... they intentionally (or simply out of ignorance or stupidity) confuse general deism with faith in a specific dogma such as Christianity etc... kind of like you idiots are doing giggling about how you share the same God etc... completely failing to recognize the very mutually exclusive and specific claims made by your various faiths etc... (and such fundamental schisms also exist WITHIN the faiths... Sunni and Shia... Catholic and Protestant... and people still MURDER EACH OTHER TODAY over these stupid and worthless disagreements over IMAGINARY SHIT.)
I could go on and on.
You people are IDIOTS and I resent you for bringing your idiocy to this list.
I would expect that people who are interested in an operating system such as this would be a little more educated than average... a little more information savvy etc... that people would be capable of pulling their own heads out of their own asses and seeing the absurdity and utter lack of knowledge they have about the world and their own beliefs as a part of that very real world etc.
THINK.
PLEASE.
Good day.
On Sun, Mar 21, 2010 at 11:20 AM, Mattia Tristo wrote:
Very nice, i think we belive in the same God.
Good work to you and cheers
Mattia Tristo

2010/3/18 Nicholas Otley:
Hi Mattia,
I can't help you on the Bible, but I'm planning on developing The Holy Qur'an and Nahj al-Balaghah for Haiku (Arabic and English).
Take care,
Nik

On 18 March 2010 18:27, Mattia Tristo wrote:
Here in the italian mailing list we are discussing about developing the Holy Bible for Beos/Zeta/Haiku, there is someone who want to develop it in English too?
Hope don't wasting your time, let me say best regards to you
Mattia Tristo

Here's some good news - BeagleBoard.org has been accepted for the Google Summer of Code programme. At the request of the administrator Jason Kridner I've also signed up as a possible mentor.
There is a gsoc group, a page of project ideas, and there's also the list of existing projects.
I'm not really sure which projects I would like to mentor given the opportunity, but there are a few that piqued my interest:
But there's a ton of other interesting projects in there.
A couple of interesting posts I came across this evening:
Showing examples of how direct goals such as 'increasing shareholder value' are the least successful way to achieve it. Food for thought in this crazy modern world where CEO bonuses and shareholder returns are the sole quantification of success of a company or industry. Interesting in light of globalisation, out-sourcing, the patenting of everything from ideas to seeds to animals, and so on and so forth - all of which put private profit before society.
A pretty depressing look into the crystal ball for the next decade or so. I don't think Australia for one realises how fragile its current `resource-driven' aka `lucky-country' or rather, `stupid-country' economy really is (as opposed to 'the clever country' era which gave me the opportunity to go through Uni). A little god-bothery at the end, but don't let that put you off; Jesse always has lots of interesting things to say.
Both about a new report from the UK Ministry of Defence about the world-wide strategic threat environment in 30 years' time. As The Guardian's sub-heading says, very grim reading. It's all about the population really; I can't imagine any palatable political solution, and any trend line with no limit has no scientific solution either. We're in for a bumpy ride even if these glimpses of the future are out by a few years. Although at my age I may just escape it through natural means.
All very depressing. I don't think it helped that I was sort-of watching a pretty grim TV series, 'The Survivors', at the same time either. Although I don't think the writers there can do maths terribly well - a 90% reduction in population would still leave a sizeable number of people in Britain - over 5 million. Not one dozen, and even raw meat takes more than a few hours to go off on the nose in a disconnected fridge! Hmm, all this reminds me for some reason of a book I read a few years ago called `The Genocides' that left me deeply disturbed for at least a week. It's just a sadistic device of the author to give the reader an unpleasant time and a lasting memory. It is a grotesque tale with no redeeming features whatsoever. Avoid.
Well, on to more mundane and thankfully lighter topics. This afternoon I finally got off my fat bum and spent 4-5 hours working on the main retaining wall. Managed to finish the bottom layer and stacked another two on top since I was on a run - I still need some ag-pipe before I can back-fill each layer with gravel, so I should try to get that early this week so I can get most of the work out of the way at last. I was going to walk it from the plumbing shop - it's only about 1km away. Although I may have stuffed up a bit - I'm not sure where the ag-pipe should drain to, and I didn't leave any drainage holes along the 12m of the run. It can still come out of the ends though, so maybe I'll just do that (it's unlikely I'll move all those damn blocks again, particularly the bottom row). I also re-cut the staircase base to make the steps deeper, although I can't fill that in till I have that ag-pipe.
I measured the height along the length once I finished - it rises a bit in the middle unfortunately, about 8mm, but at least in a relatively smooth curve. Yes yes, stupid I know - I should have measured regularly, but I did have another string-line at the brick level and used a couple of spirit levels; all those little `close enough, fuckit's add up. But it doesn't really matter, because it only goes out `significantly' where the staircase is, which is a natural break in the line. I guess I'll have to see how much it settles over time too. My $5 rubber mallet disintegrated before the last few base bricks were done, but I managed to tap down and level the last few with a bit of wood and a heavy ball-peen hammer - I was pretty well over it all by that stage, but I don't think I dropped off too much in finishing quality. Knowing my luck it will be a bit out compared to the boundary (and to-be-shed-wall), but that should only affect the aesthetics unless I made a really big mistake.
The biltong turned out more or less ok. I've been sampling bits as I went along to see how it 'aged', and the first bits were a bit young (they were also thinner). The flavour didn't really settle till today, 6 days in - once the vinegar had dissipated. I made a slight variation for two sets of strips. One I rinsed in water and vinegar to remove the excess salt, according to one recipe, and the other I left as-is in an attempt to keep more flavour intact, according to another. The vinegar one ended up pretty bland, so I'm not sure that's the right idea. But the other one is a bit salty - ok with a beer in hand or after a long hot day in the sun, but not fantastic. Since I was caught up with some other things I also salted it for about 15 hours, so maybe next time I'll just try a shorter seasoning session. Although I imagine most of the salt would still be clinging to the outside once it is hung up. I need a way to get more chilli flavour into it too - maybe some sambal pedas as a final stage. Still, one bit I tried today I'd label a success, even with the saltiness. It almost has a touch of salami flavour to it, mixed with bbq steak; a bit more chilli and less salt and it could be a winner. 1kg of well-trimmed lean porterhouse made about 400g of biltong, so it's pretty cheap to make. I'll have to investigate a better drying cabinet than a house-moving box with a shade-cloth cover though, at least if it becomes a regular task.
I'm a bit ahead of schedule on my work and I am awaiting some feedback, so I thought I'd look at what SSE2 could do to speed up tiny bits of what it does. I might not need to use it, but it'd be good to have some sort of feel for what it can do - e.g. on CELL it can make an order of magnitude difference. Some interesting and surprising results.

As a quick aside: while searching for why using an SSE intrinsic kept complaining about assigning to the wrong data-type (it turned out it was just a typo, and gcc didn't complain about the undefined `function') I came across an interesting comparison of SSE optimisations on common compilers. Actually I was quite surprised that gcc was doing quite so much with constant subexpression elimination of vector quantities - I thought intrinsics were pretty much in-line assembly with register aliases. Anyway, good to see it is well ahead of the pack in all departments.

First, I tried implementing the following function:
X = abs(Z)*exp(Y*i), with Z, X complex and Y real
The C code is all pretty straightforward:
Xrow[x] = cabsf(Zrow[x]) * cexpf(Yrow[x]*I);
The SSE code was a bit messier, but for such a simple function quite manageable. I found a nice library of SSE optimised transcendental functions needed to implement a vector cexpf(), and the Cephes maths library was an excellent reference for the internal workings of functions like cexp - e.g. so I could remove redundant work.
v4sf d0, d1;
v4sf i, r;
v4sf absz;
v4sf rr, ri;
v4sf ci, si;

d0 = Zrow[x];
d1 = Zrow[x+1];
r = _mm_shuffle_ps(d0, d1, 0x88);       /* even lanes: real parts */
i = _mm_shuffle_ps(d0, d1, 0xDD);       /* odd lanes: imaginary parts */
absz = _mm_sqrt_ps(r*r + i*i);
i = Yrow[x/2];
sincos_ps(i, &si, &ci);
rr = absz*ci;
ri = absz*si;
d0 = _mm_unpacklo_ps(rr, ri);           /* re-interleave (re,im) pairs */
d1 = _mm_unpackhi_ps(rr, ri);
Xrow[x] = d0;
Xrow[x+1] = d1;
I'm sure it's not the prettiest code, and it may not even work (not sure my shuffles are correct), but assuming it is otherwise correct, it'll do. I also tried a double version for simple C, and the above sse version with 1 unrolled loop.
I have a number of machines to try it on: a pentium-m core-duo-wtf-it's-called, an athlon 64 x2, and a phenom II 945. I'm using gcc 4.4.3, running the loop 10 000 times over a 64x64 array using constant loop counts. I'm using the aggressive optimisation flags suggested by AMD, including -march=amdfam10 and -funroll-all-loops, and also c99 features like restrict.
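The timing harness itself is nothing special - something like this sketch, where kernel_pass is just a stand-in name for one pass over the arrays:

#include <stdio.h>
#include <sys/time.h>

extern void kernel_pass(void);  /* hypothetical: one pass over the 64x64 arrays */

int main(void) {
        struct timeval t0, t1;

        gettimeofday(&t0, NULL);
        for (int i = 0; i < 10000; i++)
                kernel_pass();
        gettimeofday(&t1, NULL);

        printf("%.2fs\n", (t1.tv_sec - t0.tv_sec)
                        + (t1.tv_usec - t0.tv_usec) * 1e-6);
        return 0;
}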
Anyway, I'll keep this brief. I'm not so much interested in the various CPUs as the relative differences within them.
Code          A64-X2   Phenom II   Pentium-M
C single        1.16        0.84        1.17
SSE single      0.69        0.31        0.37
SSE unrolled    0.66        0.31        0.37
C double        1.36        0.90        1.25
Well, it's certainly an improvement, but nothing like on CELL. The Athlon 64 X2 really suffers from a 64-bit internal data-path, so the SSE is not even 2x faster, even though it's calculating 4 values at once for most operations. The intel shows the biggest improvement using SSE at just over 3x, but the Phenom is about the same. gcc has much improved since I tried similar things on the SPU, so perhaps the non-sse code is 'just that good', but I can't help but imagine there is more silicon dedicated to making non-sse code run faster too. Oh, and it is interesting that doubles are pretty much on par with singles, at least in this case. I was hoping that moving to singles would make a bigger difference - I'm too lazy at this point to try a double SSE3 version (I'd have to write sincos_pd, for example).
I tried this first because I thought a little complexity (yet not too time-consuming to write) would help the compiler do more optimisations, and it is complex enough that it can't be memory constrained. I couldn't get gcc to inline the sincos_ps, but I doubt that would've made much difference.
Then I tried something much simpler: just adding two 2D arrays. I didn't use SSE intrinsics for this, just the gcc vector types, and looped over every element of every row - using a stride calculation rather than treating it as completely linear. Hmm, perhaps I should have used non-constant dimensions to see what the compiler did with more general-purpose code.
a[x] += b[x];
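The vector-type version is essentially the same loop using gcc's vector extensions - a sketch, with the dimension handling simplified and the names invented:

typedef float v4sf __attribute__((vector_size(16)));

/* add array b to a, row by row; width and stride are in units of v4sf */
void add2d(v4sf * restrict a, const v4sf * restrict b,
           int width, int stride, int height) {
        for (int y = 0; y < height; y++) {
                v4sf * restrict arow = a + y * stride;
                const v4sf * restrict brow = b + y * stride;

                for (int x = 0; x < width; x++)
                        arow[x] += brow[x];
        }
}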
gcc unrolled the whole row-loop, so things are about as simple as they get. Actually gcc vectorised the version using simple C types as well (well, if it couldn't vectorise this loop it'd be a worry), and then it included two versions: one vectorised for when the addresses are aligned appropriately (which they are), and one using the FPU otherwise. Looking at x86 assembly gives me a headache so I didn't look too closely though. Since this loop is so simple I had to run it 1 000 000 times to get a reasonable timing.
Code              A64-X2   Phenom II   Pentium-M
C single             8.5         1.1         2.1
C vector single      6.0        0.98         1.2
Ok, so it's tuned for the phenom, but wtf is going on with the athlon64? Memory speed constrained? I presume they are all constrained by the L2 cache, or I might even be getting cache thrashing or something (this is the sort of thing you could guarantee on CELL ... huge pity IBM aren't progressing it for HPC). Don't ask me why the pentium-m version is so different - afaict from the source they're both using SSE2 code, since the memory should be aligned properly. *shrug*
So after all that I'm not sure I'm much wiser. The question being: is it worth the effort to manually vectorise the code, or even use SSE intrinsics on an x86 cpu, at least for the algorithms I'm using? Hmm ... probably ... even a 2x increase on a minor operation is better than nothing, but it'll only be an incremental improvement overall - though at least in every case it was an improvement. These are all using 32-bit operating systems too; I wonder if I should try a 64-bit one. Well, I'll see how much time I have to play with it.
Should really try GPGPU and/or OpenCL simply to get a good 'feel' of what to expect.