About Me
Michael Zucchi
B.E. (Comp. Sys. Eng.)
also known as Zed
to his mates & enemies!
< notzed at gmail >
< fosstodon.org/@notzed >
parallella ezesdk 0.3(.1)
Just uploaded another ezesdk snapshot as ezesdk 0.3 to its home page.
Didn't do much checking but it has all the recent stuff i've been playing with, from auto-resource referencing to the fft code.
Update: Obviously didn't do enough checking, the fft sample was broken. I just uploaded 0.3.1 to replace it.
FFT(N=2^20) parallella/epiphany, preliminary results
I finally (finally) got the fft routine running on the epiphany. It's been a bit of a journey mostly because each time I made a change I broke something and some of the logarithmic maths is a bit fiddly and I kept trying to do it in my head. And then of course was the days-long sojourn into calculating sine and cosine and other low-level issues. I just hope the base implementation I started with (radix-2) was correct.
At this point i'm doing synchronous DMA for the transfers and I think the memory performance is perhaps a little bit better than I thought from a previous post about bandwidth. But the numbers still suggest the epiphany is a bit of an fpu monster which has an impedance mismatch to the memory subsystem.
The code is under 2.5K in total and i've placed the code and stack into the first bank with 3 banks remaining for data and twiddle factors.
First the timing results (all times in seconds):
rows  columns  noop   fft    ffts   radix-2
1     1        0.234  0.600
2     1               0.379
1     2               0.353
4     1        0.195  0.302
1     4        0.233  0.276
2     2        0.189  0.246
4     4        0.188  0.214
arm   -               1.32   0.293  2.17
The test is a single out-of-place FFT of 2^20 elements of complex float values. The arm results are for a single core.
- noop
- This just performs the memory transfers but does no work on the data.
- fft
- The fft is performed on the data using two passes of a radix-1024 step. The radix-1024 step is implemented as 5 passes of radix-4 calculations. Twiddle factors are calculated on-core for each radix-4 pass.
The first outer pass takes every 1024th element of input, performs a 1024 element FFT on it, and then writes it back sequentially to memory (out-of-place).
The second outer pass takes every 1024th element, performs a 1024 element FFT on it but with a phase shift of 1/(2^20), and then writes it back to the same memory location they came from (in-place).
- ffts
- Using the Fastest Fourier Transform in the South on an ARM core. This is a code-generator that generates a NEON optimised plan.
- radix-2
- An out-of-place radix-2 implementation which uses a pre-calculated lookup-table for the sin/cos values. The lookup-table calculation is not included in the timing, but it takes longer than the FFT itself.
Discussion
Firstly, it's a bit quicker than I thought it would be. I guess my previous DMA timing tests were a little off or broken in some way. The arm results (outside of ffts) are for implementations which have particularly poor memory access patterns, which explains the low performance - they are pretty close to worst-case for the ARM data cache.
However as expected the limiting factor is memory bandwidth. Even with simple synchronous DMA transfers diminishing returns are apparent after 2 active cores and beyond 4 cores is pointless. Without the memory transfers the calculation is around 10x faster in the 16-core case which shows how much memory speed is limiting performance.
Compared to the complexity of ffts (it was the work of a PhD) the performance isn't too bad. It is executing a good few more flops too because it is calculating the twiddle factors on the fly: 1 396 736 sin/cos pairs are calculated in total.
Further stuff
I've yet to try asynchronous DMA for interleaving flops and mops. Although this would just allow even fewer cores to saturate the memory bus.
Currently it runs out-of-place, which requires 2x8MB buffers for operation. I can't see a way to re-arrange this to become in-place because of the butterfly of the data required between the two outer passes.
The memory transfers aren't ideal for the parallella. Only the write from the first pass is a sequential DMA which increases the write bandwidth. The only thing I can think of here is to group the writes across the epiphany chip so at least some of the writes can be in sequential blocks but this would add complication and overheads.
The inner loops could be optimised further. Unrolling to help with scheduling, assembly versions with hardware loops, or using higher radix steps.
The output of the last stage is written to a buffer which is then written using DMA; it also requires a new pair of twiddle factors calculated for each step of the calculation, which passes through an auxiliary table. These calculations could be inlined and the output written directly to the external memory (although the C compiler only uses 32-bit writes so it would require assembly language).
The code is hard-coded to this size but could be made general purpose.
The implementation is small enough it leaves room for further on-core processing which could help alleviate the bandwidth issue (e.g. convolution).
It should be possible to perform a radix-2048 step with the available memory on-core which would raise the upper limit of a two-pass solution to 2^22 elements, but this would not allow the use of double-buffering.
I haven't implemented an inverse transform.
static resource init
I noticed some activity on a thread in the parallella forum and whilst following that up I had another look at how to get the file loader to automagically map symbols across multiple cores.
Right now the loader will simply resolve the address to a core-local address and the on-core code needs to manually do any resolution to remote cores using ez_global_address() and related functions.
First I tried creating runtime meta-data for symbols which let the loader know how the address should be mapped on a given core. Once I settled on some basic requirements I started coding something up but quickly realised it was going to take quite a bit of work. The first approach was to use address matching, so that for example any address which matched a remapped address could be rewritten as necessary; but this won't work because in-struct addresses could fall through. So then I thought about using the reloc record symbol index. This could work fine if the address is in the same program on a different core, but it falls over if the label is in a different program - which is the primary case i'm interested in. Then I looked into just including a symbolic name, but this is a bit of a pain because the loader then needs to link the string pointers before it can be used. So I just copied the symbol name into a fixed-size buffer. But after all that I threw it away, as there were still a couple of fiddly cases to handle and it just seemed like too much effort.
So then I investigated an earlier idea I had which was to include some meta-data in a mangled symbol name. One reason I never followed it up was it seemed too inconvenient but then I realised I could use some macros to do some of the work:
#define REF(var, row, col) _$_ ## row ## $_ ## col ## $_ ## var
// define variable a stored on core 0,0
extern int REF(a, 0, 0);
// access variable a on core 0,0
int b = REF(a, 0, 0);
This just creates a variable named "_$_0$_0$_a" and when the loader goes to resolve this symbol it can be mapped to the variable 'a' on the specific core as requested (i'm using _N to map group-relative, or N to map this-core-relative). The same mechanism just works directly for symbols defined in the same program as well since the linker doesn't know of any relation between the symbols.
So although this looked relatively straightforward and would probably work ... it would mean that the loader would need to explicitly relink (at least some of) the code for each core at load-time rather than just linking each program once and copying it over as a block; adding a lot of on-host overheads which can't be discounted. It didn't really seem to make much difference to code size in some simple tests either.
So that idea went out the window as well. I think i'll just stick to hand coding this stuff for now.
resources
However I thought i would revisit the meta-data section idea but use it in a higher level way: to define resources which need to be initialised before the code executes - e.g. to avoid race conditions or other messy setup issues.
I coded something up and it seems to be fairly clean and lean; I might keep this in. It still needs a bit of manipulation of the core image before it gets copied to each core but the processing is easier to separate and shouldn't have much overhead.
An example is initialising a port pair as used in the workgroup-test sample. There is actually no race condition here because each end only needs to initialise itself but needing two separate steps is error prone.
First the local endpoint is defined with a static initialiser and then its reference to the remote port is set up at the beginning of main.
extern ez_port_t bport EZ_REMOTE;

ez_port_t aport = {
	.mask = 4-1
};

main() {
	int lid = ez_get_core_row();

	ez_port_setup(&aport, ez_global_core(&bport, lid, 0));

	... can now use the port
}
This sets up one end of a port-pair between two columns of cores. Each column is loading a different program which communicates with the other via a port and some variables.
Using a resource record, this can be rewritten as:
extern ez_port_t bport EZ_REMOTE;

// includes the remote port which the loader/linker will resolve to the
// lower 15 bits of the address
ez_port_t aport = {
	.other = &bport,
	.mask = 4-1
};

// resource record goes into .eze.restab section which is used but not loaded
// the row coordinate is relative, i.e. 'this.row + 0',
// and the column of the target is absolute '1'.
static const struct ez_restab __r_aport
__attribute__((used,section(".eze.restab,\"\",@progbits ;"))) = {
	.flags = EZ_RES_REL_ROW | EZ_RES_PORT,
	.row = 0,
	.col = 1,
	.target = &aport
};

main() {
	... can now use the port
}
It looks a bit bulky but it can be hidden in a single macro. I guess it could update an arbitrary pointer too, and this isn't far off what I started with - but it can only update data pointers, not the in-code immediate pointer loads that elf reloc records allow for.
Not sure what other resources I might want to add though and it's probably not worth it for this alone.
cexpi, power of 2 pi
I did a bit more playing with the sincos stuff and ended up rolling my own using a Taylor series.
I was having issues using single precision floating point to calculate the sincos values; the small rounding errors add up over a 1M element FFT. I tried creating a fixed-point implementation using 8.24 fixed point but that was slightly worse than the float version. It was quite close in accuracy, but as it requires a 16x32-bit multiply with a 48-bit result, it isn't of any use on epiphany. For the integer algorithm I scaled the input by 0.5/PI, which allows the Taylor series coefficients to be represented in 8.24 fixed point without much 'bit wastage'.
My final implementation uses an integer argument rather than a floating point argument, which makes it easier to calculate the quadrant the value falls into and also reduces rounding error across the full input range. It calculates the value slightly differently and mirrors the angle in integer space before converting to float, ensuring no extra error is added - this, and forming the input value from an integer, is where the additional accuracy comes from.
I tried a bunch of different ways to handle the swapping and ordering of the code (which surprisingly affects the compiler output) but the following was the best case with the epiphany compiler.
/*
  Calculates cexp(I * x * M_PI / (2^20))
	   = cos(y) + I * sin(y) | y = (x * M_PI / (2^20))
  where x is a positive integer. The constant terms are encoded into the
  Taylor series coefficients.
*/
complex float
ez_cexpii(int ai) {
	// PI = 1
	const float As = 3.1415926535897931159979634685441851615906e+00f;
	const float Bs = -5.1677127800499702559022807690780609846115e+00f;
	const float Cs = 2.5501640398773455231662410369608551263809e+00f;
	const float Ds = -5.9926452932079210533800051052821800112724e-01f;
	const float Es = 8.2145886611128232646095170821354258805513e-02f;
	const float Fs = -7.3704309457143504444309733969475928461179e-03f;
	const float Bc = -4.9348022005446789961524700629524886608124e+00f;
	const float Cc = 4.0587121264167684842050221050158143043518e+00f;
	const float Dc = -1.3352627688545894990568285720655694603920e+00f;
	const float Ec = 2.3533063035889320580018591044790809974074e-01f;
	const float Fc = -2.5806891390014061182789362192124826833606e-02f;

	int j = ai >> (20-2);
	int x = ai & ((1<<(20-2))-1);

	// odd quadrant, angle = PI/4 - angle
	if (j & 1)
		x = (1<<(20-2)) - x;

	float a = zfloat(x) * (1.0f / (1<<20));
	float b = a * a;
	float ssin = a * (As + b * (Bs + b * (Cs + b * (Ds + b * (Es + b * Fs)))));
	float scos = 1.0f + b * (Bc + b * (Cc + b * (Dc + b * (Ec + b * Fc))));
	int swap = (j + 1) & 2;

	if (swap)
		return bnegate(ssin, j+2, 2) + I * bnegate(scos, j, 2);
	else
		return bnegate(scos, j+2, 2) + I * bnegate(ssin, j, 2);
}
This takes about 90 cycles per call and provides slightly better accuracy than libc's sinf and cosf (compared on amd64) across the full 2PI range. The function is 242 bytes long. If the Taylor series only uses 5 terms it is about as accurate as libc's sinf and cosf, executes 6 cycles faster, and fits in only 218 bytes.
Apart from accuracy, the other reason for making a power of 2 correspond to PI is that this is what the FFT requires anyway, so it simplifies the angle calculation there.
Whether this is worth it depends on how long it would take to load the values from a 1M entry table.
As per the previous post a radix-1024 fft pass (using radix-4 fft stages) requires 341 x 2 = 682 sincos pairs for one calculation. For the first pass these can be re-used for each batch, but for the second pass the values must be calculated (or loaded) each time. This is approximately 61K cycles.
The calculation of the radix-1024 pass using radix-4 stages itself needs 1024 / 4 x log4(1024) x 8 complex multiply+add operations, or at least 40K cycles (complex multiply+add using 4xfma) plus some overheads.
Going on the memory profiling i've done so far, it's probably going to be faster calculating the coefficients on-core rather than loading them from memory and saves all that memory too.
sincos
So i've continued to hack away on a 1M-element fft implementation for epiphany.
The implementation does 2xradix-1024 passes. The first just does 1024x1024-element fft's, and the second does the same but on elements which are strided by 1024 and changes the twiddle factors to account for the frequency changes. This lets it implement all the reads/writes using LDS and it only has to go to global memory twice.
At this point i'm pretty much doing it just for something to do, because some testing showed ffts can execute on a single arm core about 2x faster than the epiphany can even read and write the memory twice - all global memory accesses from the epiphany are uncached. According to some benchmarks I ran anyway - assuming the rev0 board is the same. Given that accessing the shared memory in the same (non-cached) way from the ARM is about the same speed, I think the benchmark was valid enough.
Anyway, one problem is that it still needs to load the twiddle factors from memory, and for the second pass each stage needs its own set - i.e. more bandwidth consumption. Since there's such a flop v mop impedance mismatch I had a look into generating the twiddle factors on the epiphany core itself.
Two issues arise: generating the frequency offset, and then taking the cosine and sine of this.
The frequency offset involves a floating-point division by a power of two. This could be implemented using fixed-point, pre-calculating the value on the ARM (or at compile time), or a reciprocal. Actually there's one other way - just shift the exponent on the IEEE representation of the floating point value directly. libc has a function for it called scalbnf() but because it has to be standards compliant and handle edge cases and overflows it's pretty bulky. A smaller/simpler version:
float scalbnf(float f, int logN) {
	union {
		float f;
		int i;
	} u;

	u.f = f;
	if (u.i)
		u.i += (logN << 23);

	return u.f;
}
This shifts the exponent of f by logN - i.e. multiplies f by 2^logN - and allows division using a negative logN, e.g. divide by 1024 with logN=-10.
The second part is the sine and cosine. In software this is typically computed using enough steps of a Taylor series to exhaust the floating point precision. I took the version from cephes which does this too, but has some pre-processing to increase accuracy with fewer steps; its calculation only works over [0, PI/4] and it uses pre- and post-processing to map the input angle to this range.
But even then it was a bit bulky - standards compliance/range checking and the fpu mode register add a bit of code to most simple operations due to hardware limitations. Some of the calculation is shared by sin and cos, and in fact they both include the same two expressions which get executed depending on the octant of the input. By combining the two, removing the error handling, replacing some basic flops with custom ones good enough for the limited range, and a couple of other tweaks, I managed to get a single sincosf() function down to 222 bytes versus 482 for just the sinf() function (it only takes positive angles and does no error checking on range). I tried an even more cut-down version that only worked over a valid range of [0,PI/4] but it was too limited to be very useful for my purposes, and it would just mean moving the octant handling to another place so wouldn't have saved any code-space.
I replaced int i = (int)f with a custom truncate function because I know the input is positive. It's a bit of a bummer that FIX changes its behaviour based on a config register - which takes a long time to modify and restore.
I replaced float f = (float)i with a custom float function because I know the input is valid and in-range.
I replaced most of the octant-based decision making with non-branching arithmetic. For example this is part of the sinf() function which makes sign and pre-processing decisions based on the input octant.
j = FOPI * x; /* integer part of x/(PI/4) */
y = j;
/* map zeros to origin */
if( j & 1 ) {
	j += 1;
	y += 1.0f;
}
j &= 7; /* octant modulo 360 degrees */
/* reflect in x axis */
if( j > 3 ) {
	sign = -sign;
	j -= 4;
}

... calculation ...

if( (j==1) || (j==2) ) {
	// use the sin polynomial
} else {
	// use the cos polynomial
}

if(sign < 0)
	y = -y;
return( y);
This became:
j = ztrunc(FOPI * x);
j += (j & 1);
y = zfloat(j);

sign = zbnegate(1.0f, j+1, 2);
swap = (j+1) & 2;

... calculation ...

if( swap ) {
	// use the sin polynomial
} else {
	// use the cos polynomial
}

return y * sign;
Another function, zbnegate, negates its floating point argument if a specific bit of the input value is set. This can be implemented straightforwardly in C and ends up as three CPU instructions.
static inline float zbnegate(float a, unsigned int x, unsigned int bit) {
	union {
		float f;
		int i;
	} u;

	u.f = a;
	u.i ^= (x >> bit) << 31;

	return u.f;
}
Haven't done any timing on the chip yet. Even though the original implementation uses constants from the .data section the compiler has converted all the constant loads into literal loads. At first I thought this wasn't really ideal because double-word loads could load the constants from memory in fewer ops, but once the code is sequenced there are a lot of execution slots left to fill between the dependent flops and so this ends up scheduling better (otoh such an implementation may be able to squeeze a few more bytes out of the code).
Each radix-1024 fft needs to calculate 341x2 sin/cos pairs, which will add to the processing time, but the bandwidth limitations should absorb the extra flops and it saves having to keep the table in memory - particularly if the problem gets any larger. The first pass only needs to calculate them once for all batches but the second pass needs to calculate each set individually due to its phase shift. Some trigonometric identities should allow some of the calculations to be removed too.
borkenmacs
Emacs is giving me the shits today. It always used to be rock-solid but over the last few years the maintainers have been breaking shit left right and centre.
- Indenting is slower than it used to be (and it was already pretty slow) and keeps breaking all the time - I have to keep quitting a file when it decides it's lost the plot and reload it;
- Some fool thought that a line-oriented text-editor should track the cursor by visual lines by default;
- c-macro-expand comes broken as packaged with a useless error message whenever you run it (incorrect cpp definition);
- Locally rendered text bitmaps. i.e. slow text rendering, and slow as all shit over remote X11. And no misc-fixed fonts unless you hunt for them - i.e. shit fonts;
- Shitty menus and a really pointless toolbar that need to be turned off. The toolbar often breaks focus if it's left on;
- Crappy colours, but then again most GNU/linux tools suffer from that and it's probably more to do with distro wank.
Some of it can be fixed by local configuration, but the rest is probably best fixed by going back to a previous version and compiling without the gtk+ toolkit.
Update: Oh i almost forgot because I haven't seen it in a while, bloody undo got broken at some point. It's particularly broken on ubuntu for some reason (where I just saw it, I don't normally use ubuntu but that's what parallella uses). Sometimes editing a file you can only undo a few lines. This is particularly bad if it's in the middle of a kill-yank-undo cycle as you can end up with unrecoverably lost work.
For a text editor I can think of no higher sin.
bandwidth. limited.
I've been continuing to experiment with fft code. The indexing for a general-purpose routine got a bit fiddly so I went back to the parallella board to see about some basics first.
The news isn't really very good ...
I was looking at a problem of size 2^20, which I was going to split into two sections: 1024 lots of 1024-element fft's and then another pass which does 1024 lots of 1024-element fft's again but of data strided by 1024. This should mean the minimum external memory accesses - two full read+write passes - with most of the work running on-core. There's no chance of keeping it all on core, and even then two passes isn't much is it?
But I estimate that a simple radix-2 implementation executing on only two cores will be enough to saturate the external memory interface. FFT is pretty much memory bound even on a typical CPU so this isn't too surprising.
more fft musings
I was kinda getting nowhere with the fft code but then I saw something about doing large fft's on the parallella and it gave me some ideas to keep poking.
I did some experimenting with the memory access patterns (and towards the end had a deja-vu moment - i did most of this last year too, ...) and a radix-8 implementation, and started thinking about breaking a big problem onto the epiphany. I'm still hand-expanding the expressions using the basic radix-2 algorithm, which I believe is not optimal, but the closer I get to grokking it the easier it will be to understand the more sophisticated implementations.
I was thinking of trying to use the mesh network as a sort of bandwidth multiplier, but the LDS is already one so i'll leave those experiments till later. It should be possible to do something like a 16K element fft on the board quite efficiently but one has to work out the addressing logic first.
So I took the first-stage of my algorithm which performs the bit-reversal permute and a radix-N transform (decimation in time) and converted it to a chunked version. The data transfer works out quite nicely - a single regular sparse DMA can load in the input data which can then be 'recursively' transformed in-place up to the available LDS size - 1K elements allows for double buffering. This can then be written out as a contiguous block ready for subsequent passes. I did some preliminary work on the subsequent passes too and I think I can do them in a similar way - and do multiple passes on-core before dumping it out. The trick will be to process the data in a way that writes can be grouped together at least some of the time.
So I think the data is doable in a fairly efficient way and the LDS can be used as a bandwidth multiplier. And if the LDS is effectively always doing the same-sized fft some other code efficiencies may come into play.
I haven't looked at the coefficient tables (aka 'twiddle' factors) yet - although i think there should be some regularities there as well.
Time passes ...
I had a closer look today after starting to write this this morning (it's now evening). The coefficients for the radix-4 algorithm turn out to be quite simple the way i'm stepping through everything and I only need to load 2 values for the 4 coefficients required for a radix-4 step. I then use them as a, a, b, ib (where i = sqrt(-1)). And they are evenly spaced across the table so they can be loaded using 2x dma - i should have enough memory to double-buffer the loads (hmm, or maybe just fit them all in for a given operation since the smaller stages need fewer coefficients).
I also started looking at how to break the stages into smaller parts. I broke a 256 element fft into two phases: first phase which includes the reordering which does 2x radix-4 passes for an equivalent of radix-16. It has to do 16x of these. I can then complete the job by doing another 16x 2x radix-4 passes using data spaced 16 elements apart. In this way each 'fft processor' only requires 16 elements of storage and the memory only has to be read and written twice in total. Scaling this up to what I can fit on the epiphany ... I could do a 1M element fft by performing radix-1024 stages on each core. I would have to do 2K of these.
Normally this would be a pretty inefficient memory access pattern with L1 cache unless you could load the data synchronously across the cores (if only the dma engines had 32-bit modulos) but given the memory interface on the epiphany and the lack of cache and burst mode reads I don't think it matters so much. It still affects writes but if that's a problem there are ways around it.
I haven't yet tried any of this on the epiphany though. On my workstation I did compare it to FFT Package and although it's not as fast it's in the same city - and i haven't fully broken it up into the way it needs to fit onto the epiphany yet. I'm just writing the expressions out in full using complex float and letting the compiler optimise all the sub-expressions and so forth.
Speaking of the compiler, I tried some really basic ops to see how it handles things; the output is a bit odd, but probably because it's sitting in a function:
complex float foo1(complex float a, complex float b, complex float w) {
	return a + b * w * I;
}

complex float foo0(complex float a, complex float b, complex float w) {
	return a + b * w;
}
I was just confirming that * I got converted into swapping (re,im) to (-im,re).
Compiler does this:
1 .file "e-boo.c"
2 .section .text
3 .balign 4
4 .global _foo1
5 _foo1:
6 0000 4C150044 ldr r16,[sp,#2]
7 0004 EF840220 mov ip,r1
8 0008 CC350044 ldr r17,[sp,#3]
9 000c 2F2C0701 fmul r1,r3,r16
10 0010 3F880721 fmadd ip,r2,r16
11 0014 BF280701 fmadd r1,r2,r17
12 0018 CF8C0721 fmsub ip,r3,r17
13 001c 9740 fsub r2,r0,r1
14 001e EF300204 mov r1,ip
15 0022 E208 mov r0,r2
16 0024 4F190204 rts
17 .size _foo1, .-_foo1
18 .balign 4
19 .global _foo0
20 _foo0:
21 0028 4C150044 ldr r16,[sp,#2]
22 002c CC350044 ldr r17,[sp,#3]
23 0030 2F8C0721 fmul ip,r3,r16
24 0034 3F080701 fmadd r0,r2,r16
25 0038 BF880721 fmadd ip,r2,r17
26 003c EF000240 mov r16,r0
27 0040 CF0C0741 fmsub r16,r3,r17
28 0044 8F900724 fadd ip,ip,r1
29 0048 EF000208 mov r0,r16
30 004c EF300204 mov r1,ip
31 0050 4F190204 rts
32 .size _foo0, .-_foo0
It's doing the I * thing properly which is expected. But the way the result is formed isn't ideal. I think the delay slots added would be:
ldr r16,[sp,#2] ; w.r
ldr r17,[sp,#3] ; w.i
fmul ip,r3,r16 ; b.i * w.r
fmadd r0,r2,r16 ; a.r += b.r * w.r
;;
;;
;;
fmadd ip,r2,r17 ; b.i * w.r + b.r * w.i
mov r16,r0 ; dual issue
fmsub r16,r3,r17 ; a.r -= b.i * w.i
;;
;;
;;
fadd ip,ip,r1 ; a.i += (b.i * w.r + b.r * w.i)
mov r0,r16
;;
;;
;;
mov r1,ip
Just using fma operations instead of a separate mul and add removes the need for an extra step, needs fewer instructions, and leaves the result in the right register anyway:
;; a.r += b.r * w.r - b.i * w.i;
;; a.i += b.i * w.r + b.r * w.i;
ldr r16,[sp,#2] ; w.r
ldr r17,[sp,#3] ; w.i
fmadd r0,r2,r16 ; a.r += b.r * w.r
fmadd r1,r2,r17 ; a.i += b.r * w.i
;;
;;
;;
fmsub r0,r3,r17 ; a.r -= b.i * w.i
fmadd r1,r3,r16 ; a.i += b.i * w.r
This isn't really a big issue because normally there is more work going on and more opportunity to schedule things to fill the gaps, but I have noticed it isn't using the fma stuff as much as it might.
On a related note (not directly something i've seen out of the compiler) this is the basic step in an fft calculation:
complex float b0 = a0 + w0 * a1;
complex float b1 = a0 - w0 * a1;
One would normally expect a common sub-expression optimisation (correctly, well, perhaps: floats are very problematic in general and standards compliance might dictate a certain operation order - i'm using -ffast-math which ignores these details) to turn this into:
complex float w0a1 = w0 * a1;
complex float b0 = a0 + w0a1;
complex float b1 = a0 - w0a1;
And "on paper" it looks like an improvement: 1 complex multiply and 2 additions compared to 2 and 2; particularly since a complex multiply is 4x multiplies and 2x additions itself.
But yeah, it isn't; actually it's worse ... The first can be represented by 8x multiplies and 8x additions in 10 instructions:
mov b1r,a0r ; dual issues
fmadd a0r,w0r,a1r
mov b1i,a0i ; dual issues
fmadd a0i,w0r,a1i
fmsub b1r,w0r,a1r
fmsub b1i,w0r,a1i
;;
fmsub a0r,w0i,a1i
fmadd a0i,w0i,a1r
fmadd b1r,w0i,a1i
fmsub b1i,w0i,a1r
The second can be represented as 4 multiplies and 6 additions in 8 instructions. Better!? No. It adds another delay-stage requirement to the pipeline and has less work to fill it with, wasting further execution slots.
fmul w0a1r,w0r,a1r
fmul w0a1i,w0r,a1i
;;
;;
;;
fmsub w0a1r,w0i,a1i
fmadd w0a1i,w0i,a1r
;;
;;
;;
fsub b1r,a0r,w0a1r
fadd a0r,a0r,w0a1r
fsub b1i,a0i,w0a1i
fadd a0i,a0i,w0a1i
Assuming the dual issue ialu ops, the first takes 9 cycles and the second 14.
Again, further work to do or loop-unrolling will hide the delay slots, but it just shows you can't go on pure flops as a performance indicator.
On a slightly unrelated note I also had a quick look into extended precision arithmetic and a few other little bits and pieces. I found the paper "Extended-Precision Floating-Point Numbers for GPU Computation" by Andrew Thall that provides an extended precision floating point format using 2x 32-bit floating point numbers. I didn't get around to fully exploring it but it may be of use on the epiphany for certain applications since software doubles are not fast.
Copyright (C) 2019 Michael Zucchi, All Rights Reserved.
Powered by gcc & me!