The original proposal, 2 years ago

Discussion of the ForwardCom instruction set and corresponding hardware and software

Moderator: agner


A null register?

Post by agner » Thu Nov 02, 2017 1:29 pm

Author: csdt Date: 2016-09-23 12:55
Very interesting project.

What if there were a special named register (e.g. v0) with a special meaning:
- v0 always contains zeroed bits
- using v0 as input or output doesn't create any dependency.
We could have the same for r0.

This would be used to quickly get a zero value without creating any false dependency. I really don't like the x86 xor hack.
It could be used with any instruction, and it provides an easy way to compute some operations in a single instruction without needing a new instruction for each:
- negation: sub.f v1, v0, v1
- bitwise not: and_not v1, v0, v1
- getting -1 (int): and_not v1, v0, v0
- getting -0.0: set_bit v1, v0, $31
- compare to zero: compare v1, v1, v0, $whatever_the_comparison_is

It may also be used to discard a result:
- prefetching: move v0, [r1, length=16]

The main drawback I can see is a register name wasted. It would also make the register renaming a bit more complicated (I think), but not so much afaiu.

I would be pleased to know what you think of this.


Author: Agner Date: 2016-09-24 00:25
Some instruction sets have a null register. But, as you say, it is a register name wasted.

ForwardCom uses the principle of a null register in three cases:
  • Specifying r0 or v0 as a mask means no mask and unconditional execution.
  • Specifying the stack pointer (r31) as an array index means no index.
  • Specifying the stack pointer (r31) for vector length means a scalar.
There is no need for a register with the fixed value zero because a constant can be embedded in the instruction, and this constant can be zero or any other value.

Tiny instructions (half size) can have a 4-bit signed constant. Single-size instructions (32 bits) can have a constant of 8 or 16 bits. Larger instructions can have constants of 32 bits (support for 64-bit constants is optional). This makes it possible to set a register to zero or any other simple constant without increasing the size of the instruction. There are various methods for compressing larger constants, and they also work for floating point values.
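
For illustration, here is a sketch in C of that size-selection rule (the helper name is invented, and I assume the 4-bit tiny-format constant is signed, giving the range -8..7):

Code: Select all

#include <stdint.h>

/* Sketch: pick the smallest immediate field that holds a signed constant,
   following the sizes listed above. */
static int immediate_bits_needed(int64_t c) {
    if (c >= -8 && c <= 7)                return 4;   /* tiny (half-size)     */
    if (c >= INT8_MIN  && c <= INT8_MAX)  return 8;   /* single-size (32-bit) */
    if (c >= INT16_MIN && c <= INT16_MAX) return 16;  /* single-size (32-bit) */
    if (c >= INT32_MIN && c <= INT32_MAX) return 32;  /* larger formats       */
    return 64;                                        /* optional support     */
}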


Author: Hubert Lamontagne Date: 2016-09-24 09:39
The nice thing about a null register is that it lets you remove some opcodes. MIPS does this:

- There is no actual MOV instruction. It's replaced by "OR $rd, $rx, $rx", "ADDIU $rd, $zero, imm", and "LUI $rd, imm".
- This lets you do absolute addressing: "LW $rd, offset($zero)".
- NEG is of course "SUB $rd, $zero, $rx".
- NOT is "NOR $rd, $zero, $rx".
- You can store 0 to memory directly: "SW $zero, offset($ra)".
- Comparison instructions like BEQ, BEQL, BNE, BNEL don't have an immediate field (since it's used by the branch offset) but can still compare to 0, which is a very common value to compare with.
- As previously mentioned, values can be discarded by writing them to $zero (I'm not sure what the point is on MIPS, though, since it has an actual PREF instruction).

Generally, whether or not this is a gain tends to depend on register file size:
- SuperH and x86 have 8 registers so the 8th register is way more important than a hardwired zero.
- ARM has 16 registers and has opted not to have a zero register either (though it does have the PC mapped to a register for some reason).
- MIPS and its descendants such as RISC-V, Alpha, ARM64, and PA-RISC have 32 registers, and C++ compiler register allocators rarely fill all 32, so the very slightly reduced number of opcodes is a small but useful gain. Note that ARM64 uses the same register name for the stack pointer and the zero register (which one is meant depends on the opcode).

One downside of a zero register is that it makes software emulation more complex, since every write has to check that the destination isn't the zero register. I also wonder whether it has made out-of-order register renamers more complex. And considering that ForwardCom has a ton of opcodes already, I don't think removing a few of them with a zero register has much impact anyway.
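
Concretely, the emulation overhead looks like this (a hypothetical interpreter fragment in C; the names are invented):

Code: Select all

#include <stdint.h>

enum { ZERO_REG = 0, NUM_REGS = 32 };

/* Every register write in the emulator needs this guard, because writes
   to the hardwired zero register must be silently discarded. */
static void write_reg(uint64_t regs[NUM_REGS], unsigned rd, uint64_t value) {
    if (rd != ZERO_REG)
        regs[rd] = value;
}

/* Reads need no check as long as regs[ZERO_REG] is kept at zero. */
static uint64_t read_reg(const uint64_t regs[NUM_REGS], unsigned rs) {
    return regs[rs];
}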


Author: csdt Date: 2016-09-26 03:43
Thank you for your insight.

Is it possible to use any vector instruction with an immediate scalar broadcast?
In my mind, that was a use case for the null register.

Discarding the result is not so interesting with ForwardCom, as there is no instruction with two outputs.
Otherwise, it could have been interesting for division (discarding the remainder).


Author: Agner Date: 2016-09-27 01:21
csdt wrote:
Is it possible to use any vector instruction with an immediate scalar broadcast?
Yes. The immediate value can have different sizes, integer or float or a small signed integer converted to float.


Indexed registers

Post by agner » Thu Nov 02, 2017 1:32 pm

Author: Kurt Baumgardner Date: 2016-09-26 22:00
Ok, here's an idea I haven't heard of: Set up some registers: ProgramID, ThreadID, and ContextID. These are controlled by the CPU and set up by the OS when spawning a program or thread. The app programmer only has access to ContextID. The bits of each of these combine to address a memory buffer for the CPU registers. A thread or program task switch simply indexes a different set of registers - no need to preserve registers between task switches. You could even include some instructions for register exchange (EXX) that simply change ContextID. Maybe this mechanism could also supply register renaming functionality...?

The actual size of the registers is implementation specific; it just depends on how the address bits are divided up. This supports the variable vector registers. Of course this would have to be a very fast memory. The idea came from the page 0 stuff you could do on the good ol' 6502, where the first page of memory could be addressed with 1 byte.

If the actual registers were separate, and the register page could be copied from/to the real registers, you could use ContextID to fill all registers at once, for very fast function initialization. The compiler could place instructions at program start that would fill up register contexts for critical functions, and call up that register set when entering each function. There are a lot of possibilities. It all depends on the ability to read from/write to this register page very quickly, and the ability to put enough of this memory on the chip. The amount of memory would limit the number of concurrent programs, threads and contexts, as well as the number and size of registers. Maybe you'd have to consider a program and a thread the same thing, which could reduce the size of this memory somewhat.

If done right, I think this could really speed up task switches and function initialization and save stack space. Another idea could be a partial register exchange. ProgramID and ThreadID (or just ThreadID) point you to the proper register bank, but a special command could cause, say, R0 thru R15 to always read from ContextID 0, while R16 thru R31 would read from the context set by this special bank-switch command. For a 2-bit ContextID, that provides another 64 registers to use, 1 bank of 16 at a time. So R0 thru R15 stay the same, but R16 thru R31 are pulled from 1 of 4 banks. Or you do the full context switch, for 32 out of 128.

I'm not sure how this would affect other processes, but on the surface, it sounds powerful.

Please note that I am describing 2 opposing possible setups:
Setup #1: The registers live in this memory block, and are one and the same.
Setup #2: The registers are read from this memory block with a GetContext command, and written to the memory block with a PutContext command.

I see benefit in both setups. Unfortunately, I think either setup might have performance problems, unless some very specialized burst read/write hardware is built. But it could be a joy to program. Imagine placing all your immediates and constants into a register context once at the start of your program. Then, when you call your function, a single command sets up all your constants, your loop counts, and your buffer pointers, without using the stack, or even setting registers. Compilers would have to be a bit different, but we're trying for a better design, right?
Could be interesting. You might even be able to justify reducing your number of registers, and depend on bank swapping, which would reduce your instruction set considerably.

Maybe a routine could "push its context" to a function to achieve by-reference registers. Or copy its context into another for by-value, without using the stack or explicitly setting registers.

Is it a crazy idea? It is complex, but could be very powerful. I wonder if it is practical.

Ok, here's my last idea: If the above cannot be done, how about an instruction that pushes registers onto the stack based on a mask? The push-by-mask opcode is followed by an immediate 32-bit value with 1 bit dedicated to each register. Registers are always pushed in the same order. To push r0 thru r7, you'd use an immediate value of 11111111b. Or, to push r2, you'd use 100b. I guess this would be slow, but it would be compact, and useful. Pop would, of course, reverse the order, so the same mask could be used.
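
The semantics could be sketched in C like this (reference behavior only, not ForwardCom syntax; names invented):

Code: Select all

#include <stdint.h>

/* Reference semantics for the proposed push-by-mask / pop-by-mask:
   bit i of the 32-bit mask selects register i. Pushing from r31 down
   to r0 and popping from r0 up to r31 keeps the order fixed, so the
   same mask restores exactly what it saved. */
static void push_by_mask(uint64_t regs[32], uint64_t **sp, uint32_t mask) {
    for (int i = 31; i >= 0; i--)
        if (mask & (1u << i))
            *--(*sp) = regs[i];
}

static void pop_by_mask(uint64_t regs[32], uint64_t **sp, uint32_t mask) {
    for (int i = 0; i < 32; i++)
        if (mask & (1u << i))
            regs[i] = *(*sp)++;
}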


Author: Agner Date: 2016-09-27 01:46
My proposal has some features for speeding up context switches, but not quite as radical as you are proposing.

The memory map is so small that it can be contained on the chip. There are separate memory maps for system code. In benign cases where a process uses only a small part of the total memory map area on the chip, you may swap memory maps when switching processes.

All code is addressed relative to the instruction pointer and all read/write memory is addressed relative to the data pointer and the stack pointer. This means that everything is addressed relative to just three registers which can quickly be changed. There are separate registers that are only accessible to system code. These can be useful in a device driver. The device driver needs access to the registers of the application program because these registers (actually, only half of them) can be used for parameter transfer to the device driver. The extra registers can be used by the device driver for temporary storage of the application's registers.

I interpret your Setup #1 as something like register banks that can be swapped. This may be added as an optional feature of ForwardCom, but I don't know whether the cost in terms of central silicon space can justify it.

Your Setup #2 is reading and writing all registers to some memory area or cache. I have avoided instructions for "read all registers" or "write all registers" because I want to avoid complex micro-coded instructions, and because these instructions are very slow on x86 (I don't know about other architectures). Maybe a special on-chip cache for the sole purpose of saving registers on context switches would be useful.


Author: Kurt Baumgardner Date: 2016-09-27 21:49
Makes sense. Yeah, the only way Setup #2 would be worth it is if the registers could all be stored in a single cycle, like a latch, which would require an unconventional memory with a very wide data bus. Probably not practical. My goal has been to present new ideas, and let you smarter guys consider the possibilities and practicality. If this ability could be built, it would radically change the way functions are built and executed, and it has support for your variable vector widths, which made it seem like a nice fit. I just imagined the possibility of having nearly zero cost functions, with all constants and immediate values already set and ready to go, without any stack usage, or register setup. It would be something like inlining your whole program. Oh well :) Your current setup seems like a nice compromise.


Author: Agner Date: 2016-09-28 00:53
Kurt Baumgardner wrote:
I just imagined the possibility of having nearly zero cost functions, with all constants and immediate values already set and ready to go, without any stack usage, or register setup.
If you talk about function calls, and not context switches, then I can almost fulfill your wish. ForwardCom supports a register allocation mechanism where the compiler can see which registers are modified by e.g. a library function. It may use the same registers for temporary variables that do not have to be saved across the function call, and use different registers for variables that need to be saved across the function call. With 32 registers of each type, we will have no register spilling in the critical inner nesting levels. If the program has deeply nested function calls then the critical innermost loops are likely to be in the deeper functions while the outermost nesting levels are unlikely to matter in terms of performance. Return addresses can have their own on-chip stack separate from the data stack so that you have near zero cost function calls.


Author: Hubert Lamontagne Date: 2016-09-28 02:34
* For putting all registers in a cache-like memory block:
This would require a very large register file. For instance, the integer register file on some recent Intel cores has 168 registers. With ForwardCom's 32 registers, you could fit 5 whole contexts in there. But you have to reserve some registers for renamed values that haven't been retired yet, for out-of-order execution to operate on, so you need more than that. For running a modern OS, you'd probably need dozens and dozens of contexts, which would require something as large as a data cache. E.g., 8 bytes per register * 32 registers * 64 contexts = 16 KB - that's the size of the whole L1 data cache on some modern CPUs!

More reasonable versions of this already exist though - I think x86 already does this for HyperThreading, with 2 different contexts at the same time. Another case where it's really useful to have just 2 bank-switched sets of registers is context switching between the user program and the OS - for handling page fault exceptions, memory allocations, software TLBs, software-emulated instructions, and so forth. Often this involves bank-switching only a few registers, such as the user SP versus the system SP (as on some ARM systems).

* For saving/restoring registers in a cache-like memory block with separate instructions:
It's going to be hard to save/restore all the active registers of a context at the same time, because your physical registers will be assigned to an unpredictable subset of architectural registers. That means you'd need 32 register read/write ports to read/write all your registers at the same time. Afaik, register files with that many ports are large and power hungry. And presumably you'd need a special multiplexing mechanism to share those ports with the read/write accesses of normal operation.

You also still have the problem that your cache-like memory block has a finite size, which means that sooner or later the OS will run out and start reading/writing parts of it to main RAM anyway - which is what this mechanism was trying to prevent in the first place!

In theory you could map this special memory so that it is backed by RAM, which solves the size limit. But then all operations that read or write your special memory become potential RAM operations: they have to be issued potentially in order with other memory operations, compete for data cache access ports, and can potentially trigger interrupts. That means either the CPU has to run those operations in order (with stalls while waiting), or they all have to run speculatively and be undoable (which means the previous state has to be preserved until the operation is confirmed as having run for real). Even without RAM mapping, you'd probably still need a write buffer.

* For having load multiple/store multiple instructions
Instructions that push/pop a whole bunch of registers on the stack at the same time do exist: ARM has one (LDM/STM), and the 68000 has one (MOVEM). They are nice for improving code density. The problem is that such an instruction still generates up to 32 memory writes/reads (plus a stack pointer update). Data caches can't really handle that - they can handle at best 2 reads + 1 write per cycle. The register file can't handle it either. I'm not sure it can run faster than a simple series of normal read/write operations, plus it would probably have to be implemented as microcode.


Author: Kurt Baumgardner Date: 2016-10-03 21:34
Thank you for the detailed answers Agner and Hubert.

Hubert Lamontagne wrote:
* For putting all registers in a cache-like memory block:
This would require a very large register file....

* For saving/restoring registers in a cache-like memory block with separate instructions:
...(more issues)...
Well, that's a shame. I've been looking at some 80x86 disassembly of a C .exe recently that contained a lot of function calls. Most of them do a bunch of PUSHes, read some stack vars, do the function's actual work, and end with the POPs and finally the return. That pattern - register save, fill, use, reload, return (another stack pop) - shows up in the majority of function calls. It seems like there should be a better way. Maybe instead of a register set for every program/thread, you could have just enough space for a few levels of registers, corresponding to the call stack. Since most calls come with PUSHes, and most returns are preceded by POPs, it seems like a prime candidate for optimization, for both speed and code size.

Maybe that's still too complex or slow for hardware, though.

Agner wrote:
If you talk about function calls, and not context switches, then I can almost fulfill your wish. ForwardCom supports a register allocation mechanism where the compiler can see which registers are modified by e.g. a library function. It may use the same registers for temporary variables that do not have to be saved across the function call, and use different registers for variables that need to be saved across the function call. With 32 registers of each type, we will have no register spilling in the critical inner nesting levels. If the program has deeply nested function calls then the critical innermost loops are likely to be in the deeper functions while the outermost nesting levels are unlikely to matter in terms of performance. Return addresses can have their own on-chip stack separate from the data stack so that you have near zero cost function calls.
Yes, I can see how this *could* reduce most of that PUSH/POP stuff in function calls. But can the compiler be smart enough to follow the code flow through those nested calls? (I apologize in advance if I've missed something about ForwardCom that simplifies this task.) Most compilers I've experienced simply preserve a function's used registers in the function - in each function - because the compiler does not anticipate the code flow. And, of course, in many cases it simply cannot: CALL r15, or even CALL (r15), for example.


Author: Agner Date: 2016-10-03 23:54
Kurt Baumgardner wrote:
Yes, I can see how this *could* reduce most of that PUSH/POP stuff in function calls. But can the compiler be smart enough to follow the code flow through those nested calls? (I apologize in advance if I've missed something about ForwardCom that simplifies this task.) Most compilers I've experienced simply preserve a function's used registers in the function - in each function - because the compiler does not anticipate the code flow. And, of course, in many cases it simply cannot: CALL r15, or even CALL (r15), for example.
The compiler has to obey the function calling convention if the function is publicly visible. The calling convention for 32-bit x86 generates a lot of push and pop. The most efficient calling convention is in 64-bit x86 on Linux. Gnu, Clang and Intel C++ compilers can be quite efficient when the relevant optimization options are on.
See my calling convention manual at www.agner.org/optimize/#manuals


Author: Hubert Lamontagne Date: 2016-10-04 10:57
Kurt Baumgardner wrote:
[...]
Well, that's a shame. I've been looking at some 80x86 disassembly of a C .exe recently that contained a lot of function calls. Most of them do a bunch of PUSHes, read some stack vars, do the function's actual work, and end with the POPs and finally the return. That pattern - register save, fill, use, reload, return (another stack pop) - shows up in the majority of function calls. It seems like there should be a better way. Maybe instead of a register set for every program/thread, you could have just enough space for a few levels of registers, corresponding to the call stack. Since most calls come with PUSHes, and most returns are preceded by POPs, it seems like a prime candidate for optimization, for both speed and code size.

Maybe that's still too complex or slow for hardware, though.
Function calls and their associated push/pops are very common, but a large majority of them happen on cold paths:
- Anything that happens on a cold path rarely makes any noticeable speed difference... Even moderately warm paths tend to count a lot less than the hottest paths in overall speed.
- The hottest paths often are loops and tend to either not have any function calls, or have calls that are only occasionally called (error conditions etc), or have calls that can be inlined - which means the compiler can automatically allocate registers and even reorder operations within the parent function, as long as the function is small enough for this to be beneficial.
- Hot loops that do have function calls are often very large and complex loops that are likely to stall for all sorts of other reasons, such as not fitting in the instruction cache, having data cache misses, having pointer-following chains that stall due to data cache latency, having hard-to-predict branches, loading/storing lots of values from objects, and so forth, so the function call overhead might not stand out in the wash - or, if you're lucky, it could happen while the CPU is waiting on something else (which makes it free).

One option would be register windows, as on SPARC, which act as a hardware-assisted stack. Here's an article about that:
http://ieng9.ucsd.edu/~cs30x/sparcstack.html

Another option would be load-dual and store-dual instructions, which let you combine two 64-bit register memory operations into a single 128-bit access (as long as your stack is 128-bit aligned). That is probably as fast as possible for saving/restoring a bunch of registers at the same time (especially if you can dual-issue it to a 2-port data cache).
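
In C, the store side of that idea looks roughly like this (a sketch; on ARM64 the corresponding instruction is STP):

Code: Select all

#include <stdint.h>
#include <string.h>

/* Sketch of store-dual semantics: two 64-bit values written with one
   16-byte-aligned 128-bit store. With an aligned destination, compilers
   typically merge this into a single store-pair/vector store. */
typedef struct { uint64_t lo, hi; } regpair_t;

static void store_dual(regpair_t *stack_slot, uint64_t a, uint64_t b) {
    regpair_t p = { a, b };
    memcpy(stack_slot, &p, sizeof p);   /* one 128-bit write */
}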


Bilinear Interpolation

Post by agner » Thu Nov 02, 2017 1:46 pm

Author: Hubert Lamontagne Date: 2016-10-28 17:38
Bonus question: How well does ForwardCom handle bilinear interpolation?

It's the one algorithm I can think of that just doesn't fit well in most CPUs: it wastes a bunch of cycles twiddling bits, generating memory addresses, and computing and applying interpolation factors across RGBA channels (multiplied by the number of texture layers used), so GPUs have a big advantage at this one particular thing. It also shows up in other tasks, such as wavetable synthesis (interpolation on the time axis of the waveform plus interpolation between waveforms). On most CPUs like x86 or ARM, you can use vectors to do the interpolation part, but you still have to do lots of scalar integer ops to generate the memory addresses. And if the U,V coordinates are generated per pixel and can't be linearly interpolated, then having to send the interpolation factors over to the vector unit and scramble them into place can easily wipe out the potential gains.

Is there a way to do a particularly fast version of this on ForwardCom?


Author: Agner Date: 2016-10-29 00:48
Hubert Lamontagne wrote:
How well does ForwardCom handle bilinear interpolation?
It depends on how your data are organized into vectors/arrays. It requires a good deal of permutation. ForwardCom has good permutation instructions, but so do other vector instruction sets.


Author: Hubert Lamontagne Date: 2016-10-29 15:31
Agner wrote:
Hubert Lamontagne wrote:
How well does ForwardCom handle bilinear interpolation?
It depends on how your data are organized into vectors/arrays. It requires a good deal of permutation. ForwardCom has good permutation instructions, but so do other vector instruction sets.
Ok, how about the most classic texture organization: a single 256x256 texture, 32bpp RGBA (8 bits per component, either RGBA or ABGR order), no swizzle. Can that be bilinear-interpolated efficiently?


Author: Agner Date: 2016-10-30 00:42
I don't think I understand your problem. You have four RGBA points, each in its own vector register. All of these should be multiplied by a factor and then added together. It's just multiplication and addition of 8-bit integers. You may zero-extend everything to 16 bits to avoid loss of precision, then shift right and compress back to 8 bits.
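
In scalar C, that per-channel math looks like this (a reference sketch, not ForwardCom code; fu and fv are assumed to be 8-bit fixed-point fractions in 0..255, and the intermediate products are what the 16-bit widening protects in the vector version):

Code: Select all

#include <stdint.h>

/* Scalar reference for one output pixel: widen each 8-bit component,
   weight the four texels t00..t11 by the u/v fractions, shift back down. */
static uint32_t bilinear_texel(uint32_t t00, uint32_t t01,
                               uint32_t t10, uint32_t t11,
                               uint32_t fu, uint32_t fv) {  /* 0..255 */
    uint32_t out = 0;
    for (int c = 0; c < 4; c++) {                /* R, G, B, A */
        uint32_t s = 8 * c;
        uint32_t a = (t00 >> s) & 0xFF, b = (t01 >> s) & 0xFF;
        uint32_t d = (t10 >> s) & 0xFF, e = (t11 >> s) & 0xFF;
        uint32_t top = (a * (256 - fu) + b * fu) >> 8;     /* lerp along u */
        uint32_t bot = (d * (256 - fu) + e * fu) >> 8;
        uint32_t res = (top * (256 - fv) + bot * fv) >> 8; /* lerp along v */
        out |= res << s;
    }
    return out;
}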


Author: Hubert Lamontagne Date: 2016-10-30 21:47
Agner wrote:
I don't think I understand your problem. You have four RGBA points, each in its own vector register. All of these should be multiplied by a factor and then added together. It's just multiplication and addition of 8-bit integers. You may zero-extend everything to 16 bits to avoid loss of precision, then shift right and compress back to 8 bits.
In pseudo-ASM it looks somewhat like this:

Code: Select all

loop:

; generate memory addresses
shr r0, u_coord, 24
shr r1, v_coord, 24
shl r1, 8
add r4, r0, r1
add r2, r0, 1
and r2, 255
add r5, r2, r1
add r3, r1, 256
and r3, 65535
add r6, r0, r3
add r7, r2, r3

; load the 4 texels
mov v0, int32 [texture_adr + r4*4]
mov v1, int32 [texture_adr + r5*4]
mov v2, int32 [texture_adr + r6*4]
mov v3, int32 [texture_adr + r7*4]

; expand components from 8bits to 16bits to avoid wrap-around problems
expand.u8.u16 v0
expand.u8.u16 v1
expand.u8.u16 v2
expand.u8.u16 v3

; generate interpolation factors
shr r8, u_coord, 16
and r8, 255
mov broadcast.u16x4 v4, r8
shr r9, v_coord, 16
and r9, 255
mov broadcast.u16x4 v5, r9

; linear interpolate
sub.s16 v1, v0
sub.s16 v3, v2
mul.s16 v1, v4
mul.s16 v3, v4
shr.s16 v1, 8
shr.s16 v3, 8
add.s16 v1, v0
add.s16 v3, v2
sub.s16 v3, v1
mul.s16 v3, v5
shr.s16 v3, 8
add.s16 v3, v1

; alpha blend
mov v6, u32 [screen_ptr]
expand.u8.u16 v6
mov broadcast.s16x4 v7, v3[3]
sub.s16 v3, v6
mul.s16 v3, v7
shr.s16 v3, 8
add.s16 v3, v6
shrink.u16.u8 v3
mov u32 [screen_ptr], v3

; increment and loop
add screen_ptr, 4
add u_coord, u_delta
add v_coord, v_delta
cmp r10, screen_ptr, span_end_ptr
jz r10 loop

So the problem I'm seeing is not so much the linear interpolation as having to do ~50 instructions per pixel for the addressing, interpolation, alpha blending, and so forth. I guess you could compute multiple pixels together to soften the blow a bit - which would be good for the interpolation and alpha blending parts, although it doesn't help too much with the addressing part.


ForwardCom version 1.04

Post by agner » Thu Nov 02, 2017 1:48 pm

Author: Agner Date: 2016-12-08 03:44
Version 1.04 update today:
  • The instruction formats are made more consistent. Template E2 is modified.
  • The masking principle has been changed. Now there is an option to specify a fallback value to use when a mask bit is zero. The fallback value can be zero, an input operand, or any other value specified by a register. Registers r0 and v0 are allowed as masks to facilitate the use of boolean functions that have return values in r0 or v0. (A scalar sketch of these semantics follows after this list.)
  • The compare instruction has additional features with comparison of absolute values, and optional extra boolean operands.
  • Conditional jumps are modified.
  • Several other instructions are modified.
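
A scalar C sketch of the fallback semantics described in the second point (my reading of the description; the function is invented for illustration):

Code: Select all

#include <stdint.h>

/* Per-element semantics of masking with a fallback value: where the mask
   bit is 1 the result of the operation is used; where it is 0 the
   fallback is used instead (zero, an input operand, or any value from
   another register). */
static int32_t masked_add(int32_t a, int32_t b, int mask_bit, int32_t fallback) {
    return mask_bit ? (a + b) : fallback;
}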

Author: Matthias Bentrup Date: 2016-12-12 09:42
As the compare_jump_* instructions cover basically all conditional control-flow cases, I guess that the compare instruction will be used mostly to compute mask registers. So I wonder if it would be more economical to restrict RD and RS to 3 bits each and so recover 2x2 bits to store the condition code in the instruction itself.

Reading the condition code from a register may have some novel use cases, but I think that in 99% of all code (especially in compiled code) the condition code is fixed.


Author: Agner Date: 2016-12-12 11:46
Matthias Bentrup wrote:
I wonder if it would be more economical to restrict RD and RS to 3 bits and so recover 2x2 bits to store the condition code in the instruction itself.
You are right that it would be more compact, but it would break the uniform template system. This would make the hardware more complex and it would also generate an extra complication for the compiler. The present specification allows you to make a compare with fixed condition in a single-word instruction if the destination register is the same as one of the source registers.


Author: Matthias Bentrup Date: 2016-12-14 03:44
Agner wrote:
[...] The present specification allows you to make a compare with fixed condition in a single-word instruction if the destination register is the same as one of the source registers.
But unless I am mistaken, you need two words if you want to compare a register to an immediate value. There are trade-offs in both directions, and I don't think that either way is clearly better than the other.


Async system calls; horizontal packing instruction

Post by agner » Thu Nov 02, 2017 1:51 pm

Author: Joe Duarte Date: 2016-12-14 06:13
Hi Agner. I've been thinking about some interesting research I've read recently, and it raises a couple of questions for me:

1. Does ForwardCom assume synchronous system calls? Could it support a flexible async syscall architecture? I'm not sure where the boundary between an ISA and a syscall architecture should lie, but it seems like it could lie wherever you want it to lie – you can choose to fold syscall specifications into an ISA, or to bracket them off as an OS concern.

I ask because this work on async sys calls is very compelling: https://www.usenix.org/legacy/events/os ... Soares.pdf

2. What do you think of the family of horizontal packing operations that the Parabix team pines for? http://parabix.costar.sfu.ca/wiki/ParabixTransform

(See their section headed "Ideal Implementation Using SIMD Packing" on the above-linked page.)

Would it be wise for ForwardCom to offer such instructions? Why or why not?


Finally, it would be nice to have a detailed comparison of ForwardCom and RISC-V, much of it in the form of comparison tables that get into specific features and decisions. I volunteer to help assemble such a comparison document if you're too busy.


Author: Agner Date: 2016-12-15 00:15
Thank you for your interesting ideas.

Joe Duarte wrote:
1. Does ForwardCom assume synchronous system calls? Could it support a flexible async syscall architecture? I'm not sure where the boundary between an ISA and a syscall architecture should lie, but it seems like it could lie wherever you want it to lie – you can choose to fold syscall specifications into an ISA, or to bracket them off as an OS concern.

I ask because this work on async sys calls is very compelling: https://www.usenix.org/legacy/events/os ... Soares.pdf
System calls in ForwardCom are synchronous because they change access rights and the memory map. The switch time will be low because both memory maps can be contained inside the CPU. The "FlexSC" batch processing of system calls proposed in the article you link to could be implemented in ForwardCom in the following way: first, you make one normal system call which sets up a memory area shared between application and system. After that, you use this shared memory area to communicate between the application thread and the system thread. This requires that the application thread and the system thread each run on their own core and that these two cores have easy access to the same cache. There is still a problem of synchronization between the two threads. This would probably require speculative synchronization in order to be more efficient than standard system calls. And the system thread might be idle most of the time, which is a waste of resources.
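
For illustration, the shared area might be laid out like this (every name here is invented; ForwardCom defines nothing of the sort yet):

Code: Select all

#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical slot in the shared memory area: the application fills a
   request and sets `ready`; the system thread, polling on another core,
   executes the call and sets `done`. */
typedef struct {
    _Atomic int ready;     /* set by the application thread */
    _Atomic int done;      /* set by the system thread */
    int         number;    /* system call number */
    uint64_t    args[6];
    uint64_t    result;
} syscall_slot_t;

/* Application side: submit a request without a privilege switch. */
static void submit(syscall_slot_t *slot, int number, uint64_t arg0) {
    slot->number  = number;
    slot->args[0] = arg0;
    atomic_store_explicit(&slot->done, 0, memory_order_relaxed);
    atomic_store_explicit(&slot->ready, 1, memory_order_release);
}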
Joe Duarte wrote:
What do you think of the family of horizontal packing operations that the Parabix team pines for? parabix.costar.sfu.ca/wiki/ParabixTransform

(See their section headed "Ideal Implementation Using SIMD Packing" on the above-linked page.)

Would it be wise for ForwardCom to offer such instructions?
That sounds like a good idea. We may implement an instruction that shuffles the bits in each 64-bit block (b0,b1,b2,...,b63) of a vector register into the order (b0,b8,b16,...,b56,b1,b9,b17,...,b57,b2,b10,b18,...,b58,...,...,b7,b15,b23,...,b63). This can be followed by a permute of the bytes into the eight streams. Do you have any good use cases for the parallel bit streams?
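
In plain C, that shuffle is a transpose of each 64-bit block viewed as an 8x8 bit matrix. A loop-based reference (hardware would of course do this as a single permutation):

Code: Select all

#include <stdint.h>

/* Reference for the proposed shuffle: bit (8*r + c) of the input moves
   to position (8*c + r), turning (b0,b1,...,b63) into
   (b0,b8,...,b56, b1,b9,...,b57, ..., b7,b15,...,b63). */
static uint64_t transpose_8x8_bits(uint64_t x) {
    uint64_t y = 0;
    for (int r = 0; r < 8; r++)
        for (int c = 0; c < 8; c++)
            y |= ((x >> (8 * r + c)) & 1) << (8 * c + r);
    return y;
}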
Joe Duarte wrote:
Finally, it would be nice to have a detailed comparison of ForwardCom and RISC-V, much of it in the form of comparison tables that get into specific features and decisions. I volunteer to help assemble such a comparison document if you're too busy.
Good idea. We could make a comparison overview and put it on www.forwardcom.info or github.com/forwardcom. It may also include comparisons with x86 and ARM, but it would be too much to include all common instruction sets. I can make a draft and let you comment on it and make additions.


Comparison of instruction sets

Post by agner » Thu Nov 02, 2017 1:53 pm

Author: Agner Date: 2016-12-17 03:39
Joe Duarte wrote:
it would be nice to have a detailed comparison of ForwardCom and RISC-V, much of it in the form of comparison tables that get into specific features and decisions. I volunteer to help assemble such a comparison document if you're too busy.
Now I have made a comparison of instruction sets at http://www.forwardcom.info/comparison.html

Please correct any errors and important omissions.


Author: Joe Duarte Date: 2016-12-28 21:01
Agner wrote:
Joe Duarte wrote:
it would be nice to have a detailed comparison of ForwardCom and RISC-V, much of it in the form of comparison tables that get into specific features and decisions. I volunteer to help assemble such a comparison document if you're too busy.
Now I have made a comparison of instruction sets at www.forwardcom.info/comparison.html

Please correct any errors and important omissions.
Looks good. Notes:

1. RISC V should be RISC-V.

2. Why 32 registers? I'm distrustful of register counts like 32, 16, etc. – the Suspiciously Convenient Numerals of Computing. I understand why register width and data types need to be 32 or 64 bits, but we shouldn't expect the optimal number of registers to be 32 any more than 31, 29, or 43. The optimal number should be empirically determined.

3. What about virtualization support? Does ForwardCom need it? I'm thinking of stuff like the vmx instructions. And chipset level stuff like VT-d and SR-IOV, which may or may not be out-of-scope for ForwardCom. The word "virtualization" does not appear in the ForwardCom manual, though perhaps ForwardCom's architecture obviates the need for some of the explicit provisions for virtualization in x86.


Author: Agner Date: 2016-12-29 00:56
Joe Duarte wrote:
Why 32 registers? I'm distrustful of register counts like 32, 16, etc. – the Suspiciously Convenient Numerals of Computing. I understand why register width and data types need to be 32 or 64 bits, but we shouldn't expect the optimal number of registers to be 32 any more than 31, 29, or 43. The optimal number should be empirically determined.
The number of registers is a power of 2 because n bits in the instruction code gives 2^n possible combinations. If we had 43 registers then we would need 6 bits in each register field and we would waste 21 unused combinations. We don't want unused combinations because code density is important. Some combinations could have special meanings at the cost of a more complicated hardware. ForwardCom has a few special combinations. 31 means the stack pointer in some contexts, and "no register" in other contexts. 29 means data section pointer and 30 means instruction pointer in some addressing modes.

It is good to have many registers because ForwardCom has a mechanism for register allocation across modules so that register spilling can be minimized. This is the reason why we have 32 registers. 64 registers would require three more bits (one for each of the three register fields in the template) so that the instruction wouldn't fit into a 32-bit code word.
Joe Duarte wrote:
What about virtualization support? Does ForwardCom need it?
Good point. I have thought about support for multiple instruction sets. See chapter 10 in the forthcoming version 1.05. A microprocessor with support for multiple instruction sets would ease the transition to a new instruction set. Virtualization would be useful here to produce one virtual machine for each instruction set, at the cost of a little extra hardware.


Author: Hubert Lamontagne Date: 2016-12-30 19:51
Joe Duarte wrote:
2. Why 32 registers? I'm distrustful of register counts like 32, 16, etc. – the Suspiciously Convenient Numerals of Computing. I understand why register width and data types need to be 32 or 64 bits, but we shouldn't expect the optimal number of registers to be 32 any more than 31, 29, or 43. The optimal number should be empirically determined.
Register IDs have to be decoded fast... it's one of the things that determines the number of stages you need between the instruction cache stage and the register rename stage (fewer stages = fewer cycles of penalty on branch misprediction). The fastest way to encode register IDs in instructions is simply to have N bits that are always at the same position in the instruction, which is how you end up with 8, 16, or 32 logical registers.

The number of physical registers after rename can vary a lot more though - for instance, Sandy Bridge has 160 integer and 144 floating point physical registers after renaming, and this is a carefully optimized number so that the register file size has the right balance of propagation delay through the chip vs instruction reordering capacity.


Author: Hubert Lamontagne Date: 2017-01-05 16:47
RISC-V is starting to reach silicon. Performance is looking pretty good for a microcontroller (comparable to the ARM in a Teensy).


ForwardCom version 1.05

Post by agner » Thu Nov 02, 2017 1:54 pm

Author: Agner Date: 2017-01-22 04:39
Changes in version 1.05:
  • Systematic description of all instructions.
  • Added chapter: Support for multiple instruction sets.
  • Added chapter: Software optimization guidelines.
  • Bit manipulation instructions improved.
  • Shift instructions can multiply a float by a power of 2.
  • Integer division with different rounding modes.

Author: Jiří Moravec Date: 2017-01-23 11:58
Hi. There is probably an error in table 3.16 on page 24. The D template has only a 3-bit OP1 field, so OP1 can't be 0-15, only 0-7, in format 1.5 D.
The unconditional jump/call has OPJ = 0-7 and 8-15 in a 3-bit field. ???

And another question: how do you discriminate between jump instructions in format 1.5 C and format 1.5 D?
The IL and Mode fields are the same. And if the whole 3-bit OP1 field in format 1.5 D were used, there would be no bits left to encode 1.5 C differently!


Author: Agner Date: 2017-01-24 00:36
Jiří Moravec wrote:
The unconditional jump/call has OPJ = 0-7 and 8-15 in a 3-bit field. ???
The lower 3 bits of the 6-bit OP1 field are occupied by part of the 24-bit address in the 1.5 D format. The 24-bit jump has part of the address in the lower 3 bits of OP1 and 000 in the upper 3 bits. The 6-bit OP1 field will then be 0-7. Likewise, the 24-bit call has part of the address in the lower 3 bits of OP1 and 001 in the upper 3 bits. The 6-bit OP1 field will then be 8-15.

Other instructions cannot have format 1.5 if OP1 is less than 16. The instructions that are missing for this reason are "subtract constant and jump conditionally". These can be replaced by the corresponding "add constant and jump conditionally" with a negated constant. So the rule is that format 1.5 D is used when OP1 < 16, and format 1.5 C is used otherwise.

I organized the puzzle in this way in order to squeeze jumps and calls with 24-bit relative addresses into the template system. The 24-bit offset is scaled by 4 so that we can cover an address range of +/- 32 megabytes, which is sufficient for most programs. Unconditional jumps and calls are so frequent that it is nice to have them as single-size instructions.
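
As a schematic C fragment of that decode rule (my reading of the spec; field extraction is omitted):

Code: Select all

/* Formats 1.5 C and 1.5 D share the same IL and Mode fields, so the
   6-bit OP1 value alone disambiguates them. */
typedef enum { FORMAT_1_5_D_JUMP, FORMAT_1_5_D_CALL, FORMAT_1_5_C } fmt15_t;

static fmt15_t classify_op1(unsigned op1) {   /* op1 is the 6-bit field */
    if (op1 < 8)  return FORMAT_1_5_D_JUMP;   /* upper 3 bits 000: 24-bit jump */
    if (op1 < 16) return FORMAT_1_5_D_CALL;   /* upper 3 bits 001: 24-bit call */
    return FORMAT_1_5_C;                      /* OP1 >= 16: format 1.5 C */
}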


Syscall/ISR acceleration

Post by agner » Thu Nov 02, 2017 1:56 pm

Author: Jonathan Brandmeyer Date: 2017-01-22 23:22
There are a few features of ARM Cortex-M (the microcontroller profile) that I miss when writing system-level programs in other architectures, including Cortex-A/R:

- Lack of a control register space. In Cortex-M, the processor's control registers are all placed in a per-core region of memory at a reserved physical address. Even the memory protection unit's control registers live up there. You don't have to muck around with coprocessor control instructions to manage the caches, memory protection, etc.: it's all as simple as addressing the members of a struct based at some fixed address. I believe, but cannot prove, that this makes virtualization easier, since the next level down in the supervisor hierarchy can just map the mocked virtual peripheral at the reserved hardware address.

- Automatic stacking of registers. A major design change in Cortex-M versus the ARM7** was that instead of having banked registers for handler mode (the system-mode equivalent), the core automatically saves the call-clobbered registers to the stack of the thread-mode (user-mode equivalent) stack pointer. It also sets the link register to a value that points into the system-reserved memory space and means "return from interrupt". Obvious advantage #1: interrupt handlers can be written in plain ordinary C. Slightly less obvious advantage #2: if the processor proceeds to execute another interrupt immediately after the first (say, because the first triggered an OS scheduling event), the user-mode stack pointer doesn't need to be saved again. Similarly, when performing a context switch between different user-mode threads, only the OS scheduling interrupt must save the rest of the user-mode thread's registers to its stack frame. The obvious disadvantage is that the ABI is partially baked into the hardware with respect to the call-clobbered registers and the stack pointer.

On a Skylake-class OoO machine, I suspect that you can execute the loads into call-clobbered registers in parallel with the writes of those registers, too, as the ISR goes through the dispatch process.

- Interrupt vectoring. Each peripheral interrupt source gets its own handler assigned to it. This makes interrupt dispatch to the particular driver especially easy. I think this can be taken one step further, where the handler gets an argument or a few passed to it. That makes the signature of the interrupt handler similar to that of a closure in C: "void specific_isr(void *os_provided_data, int peripheral_provided_data)". So long as the function address plus arguments fit in a cache line, I think it shouldn't cost too much even on a big OoO machine. The interrupt controller can be 'far' away from the core it is dispatching to, passing a message the size of a cache line or less to trigger the interrupt. Now we are also baking a few argument-passing registers into the hardware, too.

My professional experience is mostly on microcontroller-class hardware, and having interrupt dispatch as fast as a function call is rather pampering. It seems from the outside that the generally poor performance of interrupt handling on larger processors has led to a series of design attempts to avoid it, some going as far as polling (!) the device instead of taking an interrupt at the time of demand. My general assumption has been that the cost mostly comes from blowing the dcache and icache for even trivial interrupt responses, just due to the branchy, pointer-chasing dispatch logic itself. If that assumption is true, having the peripheral interrupt controller perform the vectoring for the operating system would drastically reduce the data cache pressure, since the ISR would no longer perform all of the pointer-chasing dispatch logic on its own.


** ARM7 aka ARM7TDMI is actually ARMv4, while Cortex-M is ARMv7. Except for Cortex-M2x which is ARMv8. Sigh.


Author: Agner Date: 2017-01-23 00:05
Jonathan Brandmeyer wrote:
There are a few features of ARM Cortex-M (the microcontroller profile) that I miss when writing system-level programs in other architectures
Thanks for the input. The system instructions of ForwardCom are not developed yet. The idea at this point is that there are two extra sets of scratch registers, one for device drivers and one for system core. A device driver can simply save the user-mode registers to its scratch registers rather than to the stack, in order to avoid polluting the data cache. I would prefer to save the registers one by one, as needed, to avoid complex micro-coded instructions.

Data stack and call stack are preferably separate. The call stack can stay on-chip most of the time. There may be a separate call stack for device drivers and system code. We can keep the system call stack on-chip by banning recursive function calls in device drivers. This would make it possible to write a device driver that doesn't even touch the data cache unless it needs to do a lot of data processing. The device driver has access to a limited block of the application program's data space, which is likely to be well cached. I don't know how an interrupt handler gets its memory. Maybe the application program can call a device driver to enable the interrupt handler and assign some of the application's memory space to it.

I have not decided how interrupts are dispatched, but we may have the interrupt vector table stored on-chip.


Author: Jonathan Brandmeyer Date: 2017-01-25 07:50
Agner wrote:
The idea at this point is that there are two extra sets of scratch registers, one for device drivers and one for system core.
One of the gotchas from ARM banked registers is that the FIQ register state was retained across ISR return and reentry. In effect, the core has additional named architectural registers that are almost never actually accessible at any given time, yet still consume space in the physical register file. If the IRQ and System scratch registers are instead defined to be zeroed on IRQ or syscall entry, then the underlying physical registers can be freed by the rename engine on exception return, and reallocated automatically on first use after exception entry.


Author: Agner Date: 2017-01-25 13:46
Jonathan Brandmeyer wrote:
One of the gotchas from ARM banked registers is that the FIQ register state was retained across ISR return and reentry.
We probably have to clear the registers for security reasons.


Jump prefetch?

Post by agner » Thu Nov 02, 2017 2:03 pm

Author: csdt Date: 2017-01-27 10:21
The branch predictor is currently a hard part of a CPU.
I propose to help it a bit in the same way we can help the cache: tell it in advance where we want to jump.

I think this can be done by a couple of instructions:
- choose the destination from 2 immediates based on a condition
- unconditionally set the destination of a jump from a register
- unconditionally jump to the previously set destination

With the second instruction, this can help basic control flow if the program is not limited by instruction bandwidth.
And it can also help indirect calls (virtual call through a vtable for instance).

For safety reasons, this "special register" should be saved and restored during context switches.
For security reasons, the "special register" should be reset after each jump, whatever instruction is used for the jump, and a signal should be sent if the third instruction is executed while the "special register" is not in a valid state.

What do you think of this?


Author: Agner Date: 2017-01-27 11:45
csdt wrote:
The branch predictor is currently a hard part of a CPU.
I propose to help it a bit in the same way we can help the cache: tell it in advance where we want to jump.
The hard part of branch prediction is not predicting the target address, but predicting whether the jump will be taken or not. The target address can be calculated at an early stage in the pipeline for direct conditional and unconditional jumps. Certain instruction formats are reserved for jump instructions so that these instructions can be identified easily. The earlier in the pipeline we can calculate the target address, the smaller the penalty for not knowing the address of the next instruction, and the smaller the need for large branch target buffers (BTB).

It is much harder to predict whether it will jump or not. A good predictor contains large pattern tables that consume a lot of silicon space.

We can reduce the branch misprediction penalty by having a short pipeline and perhaps by fetching and decoding both targets of a two-way branch.

Multiway branches are even harder to predict, but we cannot do without them.

Function returns are easy to predict. If we have separate data stack and call stack, then the call stack will usually be so small that it can reside on the chip.


Author: csdt Date: 2017-01-30 05:06
The point of this solution is to replace conditional jumps with unconditional jumps to predefined targets.

It is always possible to treat a conditional jump as an unconditional jump whose target is either the one we want or the instruction just after the jump, depending on the condition.

Doing this, we can use unconditional jumps even for branches and loops, with a predefined address (set by a previous instruction).
It is possible to have zero-latency branches and loops (with no need for prediction), but it requires an extra instruction per jump in the instruction flow.


Author: Agner Date: 2017-01-30 07:14
csdt wrote:
The point of this solution is to replace conditional jumps with unconditional jumps to predefined targets.
It is not very clear what you mean here. It is possible, of course, to put the jump target address in a register and then do an indirect jump to the register value. This doesn't solve the problem in current hardware designs because the value of the target register is not known at the time the next instruction has to be fetched into the pipeline. You would need to store the target address several clock cycles in advance and then have a mechanism for signalling to the instruction fetcher at what point it would have to fetch from the new address. The compiler then has to place the switch-target instruction at the right position in the instruction sequence. The distance from the switch-target instruction to the point of divergence has to fit the length of the pipeline, which depends on the specific hardware implementation. This may be possible, but I am not aware of any attempt to construct such machinery.
csdt wrote:
It is always possible to treat a conditional jump as an unconditional jump whose target is either the one we want or the instruction just after the jump, depending on the condition.
Some instruction sets have an instruction for conditionally skipping the next instruction. If the instruction that you skip is not a jump instruction, then this is in effect a conditional execution of the instruction. ForwardCom can execute instructions conditionally by using a predicate or mask. This avoids misprediction, but it doesn't save time, because the time it takes to not-execute an instruction is the same as the time it takes to execute it. The reason is that the value of the predicate register is not known at the time the instruction is scheduled. It might be possible to construct a scheduler that checks whether the value of the predicate register is known sufficiently early to skip the instruction completely and not schedule it for execution. That would require a quite complicated hardware design, and I don't think it has ever been done.

If you have a system that conditionally skips one instruction, and the instruction you skip is a jump instruction, then you have a situation that is equivalent to a conditional jump. The instruction fetcher will not know in advance which instruction comes next.

This all boils down to the issue of pipelining. Current high-end microprocessors have a pipeline of typically 10-20 stages. This means that the processor has to know 10-20 clock cycles in advance which instruction to fetch next. If this information is not actually known 10-20 clock cycles in advance, then you cannot avoid wasting time on a wrong guess, no matter how you implement the branching. Any mechanism that makes this information known to the instruction fetcher sufficiently early would be quite complicated to implement in both the hardware and the compiler.


Author: csdt Date: 2017-01-30 09:53
Agner wrote:
csdt wrote:
The point of this solution is to replace conditional jumps with unconditional jumps to predefined targets.
It is not very clear what you mean here. It is possible, of course, to put the jump target address in a register and then do an indirect jump to the register value. This doesn't solve the problem in current hardware designs because the value of the target register is not known at the time the next instruction has to be fetched into the pipeline. You would need to store the target address several clock cycles in advance and then have a mechanism for signalling to the instruction fetcher at what point it would have to fetch from the new address. The compiler then has to place the switch-target instruction at the right position in the instruction sequence. The distance from the switch-target instruction to the point of divergence has to fit the length of the pipeline, which depends on the specific hardware implementation. This may be possible, but I am not aware of any attempt to construct such machinery.
The idea is not to help the prediction, but to "override" the prediction for a single given jump instruction. So there is no failed prediction, in the same way that you cannot "mispredict" an unconditional jump to a fixed target.
Let me give you a few examples (pseudo-asm syntax).

The 3 instructions are:
- set_jmp target
- set_jmp cond target0 target1 (set target to "target0" if "cond" is 0 and to "target1" otherwise)
- jmp2target

Here is a simple loop:

(prolog)
loop:
%r01 = sub.i32 %r01 1
set_jmp %r01 loop_end loop
(loop body)
jmp2target
loop_end:
(epilog)

So basically, the jump is always taken, and you know exactly the address of the jump in advance.
Of course, you need to know when to load the instructions at the target address, but this can be handled by the branch predictor like a regular unconditional jump to a fixed address.

The same applies to if conditions:

%r01 = (compute the condition)
set_jmp %r01 if_end if_then
(some unrelated computation)
jmp2target
if_then:
(conditional computation)
if_end:
(unconditional computation)

Of course, if you set the target too late, the fetcher has to wait, so you need to execute the "set_jmp" early enough - but the same problem arises with data prefetching anyway.


Author: Agner Date: 2017-01-31 01:40
csdt wrote:
The 3 instructions are:
- set_jmp target
- set_jmp cond target0 target1 (set target to "target0" if "cond" is 0 and to "target1" otherwise)
- jmp2target

Here is a simple loop:

(prolog)
loop:
%r01 = sub.i32 %r01 1
set_jmp %r01 loop_end loop
(loop body)
jmp2target
loop_end:
(epilog)

So basically, the jump is always taken, and you know exactly the address of the jump in advance.
OK. Now I think I understand you.

Your idea is somewhat like an extension of a "delay slot". Some old RISC architectures (MIPS, SPARC, OpenRISC) have a delay slot of one or two instructions after a jump. The instructions in the delay slot are executed before the jump takes effect. The main problem with these delay slots is that they are optimized for a particular hardware implementation. A new implementation with a longer pipeline and more parallelism would need a longer delay slot.

This makes me wonder if it would be useful to have a variable-length delay slot. A branch instruction could have an 8-bit field indicating a delay slot of 0-255 instruction words. The compiler will place the delayed branch instruction as early in the code as it can. The instruction fetcher in the CPU will then know in advance what to fetch after the branching point. If the fetcher has already fetched past the branching point at the time the delayed branch is executed then it has to discard the extra instructions and fetch again. This will give a bubble in the pipeline, but this bubble will be smaller the longer the delay slot is.

There are known problems with delay slots: Interrupts, debug breakpoints, and jumps in the delay slot lead to extra complications.

We might overcome these problems by making the delayed jump instruction a "prediction hint" to be confirmed later by a regular branch instruction at the actual branching point. Only in the rare case that the prediction hint is lost or compromised due to interrupts or whatever will we have a misprediction at the branch point.

The delay slot has to be quite long. Modern microprocessors can execute 3 or 4 instructions per clock cycle and have a pipeline of 10-20 stages. With 3 instructions per clock on average and a 10-stage pipeline we would need 30 instructions to fill the delay slot. If no other jump instructions are allowed in the delay slot then we would rarely be able to make a sufficiently long delay slot.

Now, back to your proposal. I understand that you want to have a separate register for jump targets only. We would have to set this jump target register some 30 instructions in advance in order to keep the pipeline full. I think it is unlikely that we have no other branch instructions in this 30-instruction window, so we need more than one jump target register. If we need many jump target registers, then we might as well use the general purpose registers for jump targets instead. So now your set-jump-target instruction just becomes an ordinary instruction that calculates a jump target and puts it into a register, and your jump-to-target instruction becomes a register-indirect jump. The problem then becomes how to make the register value available to the branch prediction front end of a superscalar processor. The instructions that set the jump target are executed in the out-of-order back end, while the branch predictor operates in the in-order front end.

This makes me think: is it possible to execute simple instructions already in the in-order front end of a superscalar processor? If we can execute the instructions that calculate the jump target already in the in-order front end, then we have reduced the necessary delay to a few clock cycles. I am imagining a dual execution system: We have a system of in-order ALUs and out-of-order ALUs. Simple instructions are executed in the in-order ALUs if the operands are already available while the instruction is decoded in the in-order front end. If the operands are not available yet, then the instruction is passed on to the out-of-order back end.

Here, we need a system to track all the general purpose registers to see if they are ready. We can have a "register cache" in the in-order front end. Every time the decoder sees an instruction that writes to a register, it will mark the entry for this register as dirty and remember which instruction it was. When this instruction is later executed (in order or out of order), the value is written to the register cache and marked as clean (unless another instruction writing to the same register has been encountered in the meantime). Simple instructions can execute in the in-order system if all their input operands are ready. The result is stored both in the register cache and passed on to the register renamer/allocator.
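A minimal C sketch of this bookkeeping, assuming 32 architectural registers; the type and function names are hypothetical:

#include <stdbool.h>
#include <stdint.h>

enum { NREGS = 32 };

typedef struct {
    uint64_t value[NREGS];   /* last known value of each register */
    bool     dirty[NREGS];   /* a write to this register is still in flight */
    uint32_t writer[NREGS];  /* id of the newest instruction writing this register */
} reg_cache_t;

/* The decoder sees an instruction (id) that writes register rd. */
static void on_decode_write(reg_cache_t *rc, int rd, uint32_t id) {
    rc->dirty[rd]  = true;
    rc->writer[rd] = id;            /* remember the newest writer */
}

/* Instruction id finishes, in order or out of order, with result rd = v. */
static void on_execute_write(reg_cache_t *rc, int rd, uint32_t id, uint64_t v) {
    if (rc->writer[rd] == id) {     /* only the newest writer marks the entry clean */
        rc->value[rd] = v;
        rc->dirty[rd] = false;
    }
}

/* A simple instruction may execute in the in-order system only if all
   of its source registers are clean. */
static bool ready_in_front_end(const reg_cache_t *rc, const int *srcs, int n) {
    for (int i = 0; i < n; i++)
        if (rc->dirty[srcs[i]]) return false;
    return true;
}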

Branch instructions, jumps, calls, etc. should execute in the in-order front end. The compiler should make sure that any registers that the branch depends on are calculated as early as possible, and preferably using simple instructions that can execute in the in-order front end. If a branch depends on a memory operand, such as a jump table or table of function pointers, then the compiler should load the memory operand into a register as early as possible and then make a register-indirect jump.

A simple loop with a loop counter could benefit from this. The increment and compare instructions for the loop counter will be simple instructions that are likely to execute in the in-order front end so that the values are available early.

Multiway branches are particularly difficult to predict, and they could benefit from a system that calculates the target address in advance. For example, a byte code interpreter typically loads one byte at a time and uses this code as index into a multiway jump table. The software can easily read the next byte code in advance, fetch the target from the jump table and store it in a register.
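To make the byte code case concrete, here is the pattern in C, using the computed-goto extension of GCC/Clang; the opcodes and handlers are invented for the example. The handler address for the next byte code is looked up one step ahead, so the indirect jump target is known early:

#include <stdint.h>

int64_t run(const uint8_t *code) {      /* code must end with opcode 0 (halt) */
    static void *dispatch[] = { &&op_halt, &&op_inc, &&op_dec };
    int64_t acc = 0;
    const uint8_t *ip = code;
    void *next = dispatch[*ip];         /* target of the first dispatch */
    goto *next;

op_inc:
    next = dispatch[ip[1]];             /* read the next byte code ahead of time */
    ip++;
    acc++;
    goto *next;                         /* indirect jump; target was computed early */
op_dec:
    next = dispatch[ip[1]];
    ip++;
    acc--;
    goto *next;
op_halt:
    return acc;
}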

This is an interesting discussion. The software may be able to calculate branch targets early in many cases, but the intricacies of out-of-order execution make it complicated to forward this information to the in-order front end. Whatever solution we may come up with might be complicated, but still use less silicon space than the complicated branch prediction mechanisms that are used in modern microprocessors today.


Author: csdt Date: 2017-01-31 04:31
We now understand each other ;)

The solution I propose shares some features with a delay slot, but solves the main issue: the length of a delay slot is implementation-dependent.
In my solution, if you set the target too late, you just wait and create a bubble in the pipeline, but the CPU still executes what you expect.

Agner wrote:
There are known problems with delay slots: Interrupts, debug breakpoints, and jumps in the delay slot lead to extra complications.
For interrupts, the target must be saved and restored at the end of the interrupt handler. (You will probably lose the benefit of setting the target early, but there is no undefined behavior here.)
For jumps in the "delay slot" (between the target set and the actual jump), my first idea was to forbid them, and when one happens, trap on the attempt to jump with "jmp2target".
The way to implement this is a flag in the jump register indicating an invalid value; if we try to jump while this flag is set, we trap.
The flag is set by default and is set again every time a jump happens; it is cleared when a target is defined with "set_jmp".
Of course, these are just ideas and are adjustable.
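For concreteness, the flag logic might look like this in C, with abort() standing in for the trap; all names are made up:

#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint64_t target;
    int      invalid;   /* 1 = no valid target since the last jump */
} jump_reg_t;

static void set_jmp(jump_reg_t *j, uint64_t target) {
    j->target  = target;
    j->invalid = 0;                  /* target is now defined */
}

/* In hardware, every taken jump (not only jmp2target) would set the
   invalid flag again; this sketch only models jmp2target itself. */
static uint64_t jmp2target(jump_reg_t *j) {
    if (j->invalid) abort();         /* trap: no set_jmp since the last jump */
    j->invalid = 1;
    return j->target;
}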

Agner wrote:
Multiway branches are particularly difficult to predict, and they could benefit from a system that calculates the target address in advance. For example, a byte code interpreter typically loads one byte at a time and uses this code as index into a multiway jump table. The software can easily read the next byte code in advance, fetch the target from the jump table and store it in a register.
That is exactly the kind of code that made me start thinking about this subject (along with vtables, which are basically jump tables).

Now, I should mention that I'm not a hardware guy, but I have taken advanced computer architecture courses. (Basically, I know how the hardware behaves, but I don't know how to implement it.)

I think it would be better to have separate registers that can live in the front-end world, with the "set_jmp" instruction as the interface between the back-end and front-end worlds.
To know whether a "set_jmp" is still being executed when the front end has to fetch the "after-jump" instruction, we can have a flag in the special register, set by the front end, telling that an update of this register is pending.
In this case, the fetcher has to wait for the back end to update the register. To speed things up further, the back end could give priority to executing this instruction.

Maybe you could add regular branch prediction for the cases where you would otherwise have to wait. You would still benefit from knowing the jump target by avoiding a flush of the whole pipeline (only a part of it), but this could be much more complicated (I have no clue on this point).
I think it depends on whether you take this approach as the only way to branch, or as a mechanism complementary to ordinary branches.

I wonder about having multiple targets alive at the same time. It would be very handy for nested loops and nested control flow in general, but some security issues arise.
For instance, with only one jump target, it is easy to detect a jump between "set_jmp" and "jmp2target" (see the beginning of this post). But with several of them, such jumps become hard to detect without losing the benefit of having multiple jump targets.

I hope my explanations are understandable enough.


Author: Hubert Lamontagne Date: 2017-02-01 01:54
Agner wrote:
[...]
This makes me think: is it possible to execute simple instructions already in the in-order front end of a superscalar processor? If we can execute the instructions that calculate the jump target already in the in-order front end, then we have reduced the necessary delay to a few clock cycles. I am imagining a dual execution system: We have a system of in-order ALUs and out-of-order ALUs. Simple instructions are executed in the in-order ALUs if the operands are already available while the instruction is decoded in the in-order front end.
That is sounding a lot like a Decoupled Access Execute architecture. Some documents about this:
https://cseweb.ucsd.edu/classes/wi09/cs ... oupled.pdf
http://personals.ac.upc.edu/jmanel/pape ... der CPUs)
http://citeseerx.ist.psu.edu/viewdoc/do ... 1&type=pdf
http://spirit.cs.ucdavis.edu/pubs/conf/dynamicarch.pdf

One general problem with that kind of architecture is that it's hard to keep it simple; such designs tend to end up more complex than the simpler OOO CPUs (in which case it's hard to argue against just going with the OOO design).
If the operands are not available yet, then the instruction is passed on to the out-of-order back end.

Here, we need a system to track all the general purpose registers to see if they are ready.
A lot of simpler Out-Of-Order CPUs do something like this for memory address operands... For instance, the classic Athlon initiates memory loads and stores in-order. Due to this, instructions that generate memory addresses and don't depend on loads/stores will generally run quite early in the execution stream, and the readiness of the operands gets tracked by the scheduler. Maybe you could add a priority flag to instructions that affect control flow so that the scheduler will run them early (and possibly instructions that affect load/store addresses too).
We can have a "register cache" in the in-order front end. Every time the decoder sees an instruction that writes to a register, it will mark the entry for this register as dirty and remember which instruction it was. When this instruction is later executed (in order or out of order), the value is written to the register cache and marked as clean (unless another instruction writing to the same register has been encountered in the meantime). Simple instructions can execute in the in-order system if all their input operands are ready. The result is stored both in the register cache and passed on to the register renamer/allocator.

Branch instructions, jumps, calls, etc. should execute in the in-order front end. The compiler should make sure that any registers that the branch depends on are calculated as early as possible, and preferably using simple instructions that can execute in the in-order front end. If a branch depends on a memory operand, such as a jump table or table of function pointers, then the compiler should load the memory operand into a register as early as possible and then make a register-indirect jump.

A simple loop with a loop counter could benefit from this. The increment and compare instructions for the loop counter will be simple instructions that are likely to execute in the in-order front end so that the values are available early.

Multiway branches are particularly difficult to predict, and they could benefit from a system that calculates the target address in advance. For example, a byte code interpreter typically loads one byte at a time and uses this code as index into a multiway jump table. The software can easily read the next byte code in advance, fetch the target from the jump table and store it in a register.

This is an interesting discussion. The software may be able to calculate branch targets early in many cases, but the intricacies of out-of-order execution make it complicated to forward this information to the in-order front end. Whatever solution we may come up with might be complicated, but still use less silicon space than the complicated branch prediction mechanisms that are used in modern microprocessors today.
Afaik, the case that really kills is this: Somewhat predictable branches that depend on memory operands (especially rarely used memory values that are loaded late and come in from L2 or later).

In C++ code this shows up as checks on rarely used or rarely changed flags to trigger rare special cases - for instance, logging errors on exceptional cases. This leaves you with jumps that you're 99.99% sure will never be taken but will still take dozens of cycles to retire. I'm not sure I've seen any other solution to this than speculation and branch prediction (aside from the Transmeta Crusoe's strange backup-restore of the whole CPU state and Itanium's "insert omniscient compiler here" scheme)... and this mess has a kinda massive cost as far as I can tell (need to back up the result of every single instruction going through the pipeline, need a large register file, need aggressive register renaming due to having so many renames per cycle).


Author: Agner Date: 2017-02-01 03:12
Hubert Lamontagne wrote:
That is sounding a lot like a Decoupled Access Execute architecture.
Thanks a lot for the links. Yes, this resembles what I am thinking about. The papers you are referring to are from the 1990s. I found a few newer articles on the topic, but no real progress:
http://www.jarnau.site.ac.upc.edu/isca12.pdf
http://www.diva-portal.org/smash/get/di ... FULLTEXT02

These articles still don't solve the problem of how to split the instruction stream into a control stream and a compute stream. This makes me think of the following idea. Let the compiler insert an instruction telling which registers are part of the control stream. For example, if register r1 is used as a loop counter, and r2 is controlling a branch, then tell the microprocessor that r1 and r2 should be privileged. Any instruction that has r1 or r2 as destination will go into the control stream. These instructions, as well as all branches, jumps and calls will be executed in the in-order front end as far as possible.
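A toy model of this steering rule in C, with the marked registers kept as a bit mask; everything here is hypothetical:

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    int  dest;        /* destination register number, or -1 if none */
    bool is_branch;   /* branch, jump, call, return, ... */
} insn_t;

/* Compiler hint: r1 (loop counter) and r2 (branch condition) are
   control-stream registers. */
static const uint32_t control_regs = (1u << 1) | (1u << 2);

/* All control flow, and every instruction writing a marked register,
   goes to the in-order front end; everything else goes to the back end. */
static bool steer_to_front_end(insn_t ins) {
    if (ins.is_branch) return true;
    return ins.dest >= 0 && ((control_regs >> ins.dest) & 1);
}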

The front end should have access to the permanent register file. The back end may have out-of-order execution with register renaming. The renamed registers will be written to the permanent register file when they retire. We have to keep track of whether the permanent registers are up to date. When the decoder sees an instruction that writes to a register, it will mark this register as invalid in the permanent register file and remember which instruction it belongs to. When this instruction is executed (whether in the front end or the back end), the register entry is marked as valid, unless another instruction writing to the same register has been seen in the meantime.

The main problem we are trying to solve with this design is that the very long pipeline of superscalar processors is bad for branch instructions. The solution with a branch target register puts the branch in the front end, but not the calculation of the branch target. We would need perhaps 30 or 50 instructions between the calculation of the branch target and the branch itself in order to keep the pipeline full. To reduce this distance, we need to also put the calculation of the branch target, and everything that the branch depends on, in the front end as well. This includes not only branch targets, but also loop counters, branch conditions, indexes into jump tables, and all preceding calculations that these depend on. A separate set of registers and instructions for all this would be too complicated. I think it is easier to use the existing registers and tell the microprocessor which registers are part of the control stream. This wouldn't be too hard to implement in a compiler, I think. The question is how efficient we can make it in the front end.


Author: Hubert Lamontagne Date: 2017-02-01 17:21
There are two different ways to select values that can be sent to the back end:
- Value is never used in any memory address calculation
- Value is never used in any branch address calculation

They are actually somewhat orthogonal:
- A CPU can have a front-end that runs in-order but has register renaming and can be rolled back, which allows conditionals on the back-end but not memory index calculations
- A CPU can have a front-end that supports reordering but not speculative execution (all writebacks must be real!), allowing back-end indexing but not conditionals

Non-indexing and non-conditional values can be marked in SSA form: starting from the bottom, any value that gets "garbage collected" (goes out of scope, runs out of destination operations) without ever going to a memory load/store address or branch/jump-address can be marked, which in turn allows marking preceding values as non-indexing and/or non-conditional. The proportion of non-indexing and non-conditional values depends on the program (the test I did ended up with about 80% non-indexing values and 60% non-conditional values but this might not be typical at all). It might be very hard to perform this kind of marking through function calls or complex control flows though.
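One way to do this marking, sketched in C as a single reverse sweep over the values (valid because SSA definitions precede uses); the representation is invented for the sketch:

#include <stdbool.h>

#define MAXOPS 3

typedef struct {
    int  ops[MAXOPS];   /* indices of operand values, -1 if unused */
    bool feeds_addr;    /* used directly as a load/store address */
    bool feeds_branch;  /* used directly as a branch condition or target */
} value_t;

/* After the sweep the flags are transitive: a value is marked if any
   direct or indirect user feeds an address or a branch. Values left
   unmarked are the non-indexing / non-conditional ones. */
static void mark_streams(value_t *v, int n) {
    for (int i = n - 1; i >= 0; i--)        /* visit users before their operands */
        for (int k = 0; k < MAXOPS; k++) {
            int o = v[i].ops[k];
            if (o < 0) continue;
            v[o].feeds_addr   |= v[i].feeds_addr;
            v[o].feeds_branch |= v[i].feeds_branch;
        }
}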


Author: Agner Date: 2017-02-02 01:11
Hubert Lamontagne wrote:
They are actually somewhat orthogonal:
- A CPU can have a front-end that runs in-order but has register renaming and can be rolled back, which allows conditionals on the back-end but not memory index calculations
- A CPU can have a front-end that supports reordering but not speculative execution (all writebacks must be real!), allowing back-end indexing but not conditionals
My intention is clearly number 2. The purpose is to shorten the branch misprediction delay to probably 3 clock cycles: fetch, decode, execute. The front end may allow reordering in the sense that an instruction can wait for an operand without delaying subsequent independent instructions. But it should not have register renaming in the front end.

I am focusing on branching here, which is clearly a weak spot in current superscalar designs. I am not intending to use this as a way of prefetching memory operands. The out-of-order mechanism is able to handle delayed memory operands satisfactorily.

My vision is this: The compiler is indicating which registers it intends to use for control flow. For example, it may use register r1 for a loop counter, r2 for a branch condition, and r3 for an index into a jump table. All instructions that modify r1, r2, and r3 will execute in the front end if possible. The remaining instructions are sent to the back end. Instructions that have any of these registers as input operand will have the register operand replaced by its value before it is sent to the back end. Other register operands that are available in the permanent register file can likewise be replaced by their values. Instructions that execute in the front end may write their result not only to the permanent register file but also to any pending instruction that needs it.

The result of this mechanism is that a loop that is controlled by the marked registers only will be unrolled in the front end. The instructions in the loop body will be passed on to the back end with the loop counter replaced by its value. The back end has register renaming so that it can execute the body of the second iteration before it has finished the body of the first iteration.

All other control flow, such as branches, jumps, calls, returns, etc., will be unrolled in the front end as well so that control flow is effectively resolved before execution as far as it does not depend on delayed data.

The demand for advanced branch prediction is somewhat relaxed if we can drastically reduce the branch misprediction delay in this way. The branch prediction machinery in modern processors is very complicated and requires a lot of silicon space for branch target buffers and pattern/history tables. We can cut down on this and use a simpler branch prediction scheme if branches are resolved in the front end.

The front end may fetch and decode speculatively, but not execute speculatively. It may even fetch both sides of a branch that it expects to have poor prediction. Fetching both sides of a branch becomes cheaper here than with a traditional superscalar design because it only needs to duplicate two pipeline stages (fetch and decode) in order to minimize branch misprediction bubbles.

The fetch and decode stages should have higher throughput than the rest of the pipeline so that they can catch up and fill the pipeline after a branch bubble. The fetch stage should be able to fetch a large block of code in one clock cycle, possibly crossing a cache line boundary. The decoder should be able to decode the first one or two instructions in a fetched code block in the first clock cycle, and determine the starting points of the remaining instructions. This will allow it to decode maybe four or more instructions in the second clock cycle after fetching a block of code. The aim of this is to keep the branch misprediction delay as low as possible.

In case the front end ALU is idle because there are no instructions that write to marked registers, it may execute any other instructions if their operands are ready.


Author: Agner Date: 2017-02-14 13:20
This idea about decoupling the control flow from execution is now included as a proposal in the manual version 1.06 (chapter 8.1).

http://www.agner.org/optimize/forwardcom.pdf


Author: Hubert Lamontagne Date: 2017-01-31 01:55
csdt wrote:
[...]
The idea is not to help the prediction, but to "override" the prediction for a single given jump instruction. So there is no failed prediction in the same way as you cannot "mispredict" an unconditional jump to a fixed target.
Let me give you a few examples (pseudo-asm syntax).

The 3 instructions are:
- set_jmp target
- set_jmp cond target0 target1 (set target to "target0" if "cond" is 0 and to "target1" otherwise)
- jmp2target
[...]

So basically, the jump is always taken, and you know exactly the address of the jump in advance. Of course, you need to know when to load the instructions at the target address, but this can be handled by the branch predictor like a regular unconditional jump to a fixed address.
[...]
You'd need to compute the conditional at minimum ~10 instructions before the jump, even in the best case on the shortest pipelines (and with a bypass path straight from the register bypass network to the instruction cache address input!). For instance, with a 3-instruction-per-cycle CPU:

set_jmp %r01 loop_end loop
nop ; loads on same cycle as the jump, conditional execution impossible
nop ; loads on same cycle as the jump, conditional execution impossible
nop ; starts loading while the jump is coming out of instruction cache
nop ; starts loading while the jump is coming out of instruction cache
nop ; starts loading while the jump is coming out of instruction cache
nop ; starts loading while the instruction aligner forms a 3 instruction group
nop ; starts loading while the instruction aligner forms a 3 instruction group
nop ; starts loading while the instruction aligner forms a 3 instruction group
nop ; starts loading while the register renamer is mapping %r01
nop ; starts loading while the register renamer is mapping %r01
nop ; starts loading while the register renamer is mapping %r01
jmp2target ; earliest instruction that could execute conditionally in the MOST optimistic case imaginable (but real CPUs have way more front-end latency cycles)


If the conditional value doesn't depend on memory, and it doesn't depend on results very low down the computation chain, then this could theoretically work - for instance this could probably work on loop indexing (if LLVM can hoist the conditional all the way up the SSA), especially if the branch predictor serves as a fallback.

I guess this could also sorta make sense for floating point or SIMD conditional branches (which often have scheduler penalties and slow value hoisting paths).

Also, you don't need to calculate the branch target early (that can be supplied by the branch target predictor), only the conditional value. So maybe instead of a special jump, you could simply do this by letting the branch predictor have some special port that can read flag values early, which means that it could get 100% prediction if the flag values have been "cold" for long enough when it reaches the jump.


High precision arithmetic

Post by agner » Thu Nov 02, 2017 2:05 pm

Author: fanoI Date: 2017-03-21 03:41
Agner, the code to do an addition between "big integers" at page 85 of the ForwardCom manual - is it applicable to any CPU that can do vector integer addition? Could it work on x86 using PADDB?

This idea is applicable to all the other operators (-, *, /, &, ...), right?


Author: Agner Date: 2017-03-21 04:16
fanoI wrote:
Agner, the code to do an addition between "big integers" at page 85 of the ForwardCom manual - is it applicable to any CPU that can do vector integer addition?
Yes
Could it work on x86 using PADDB?
Better use VPADDQ
This idea is applicable to all the other operators (-, *, /, &, ...), right?
Only + and -.
* and / are more complicated. & is straightforward because it has no carry.
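For x86, the general technique looks roughly like this in C with AVX2 intrinsics: _mm256_add_epi64 (VPADDQ) does the 64-bit lane additions, and the carries between limbs are resolved with full-adder generate/propagate logic. This is a sketch of the idea under my own limb layout, not the exact code from the manual:

#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

/* r = a + b, little-endian arrays of 64-bit limbs, nlimbs a multiple of 4.
   Returns the final carry out. */
uint64_t bigint_add_avx2(uint64_t *r, const uint64_t *a, const uint64_t *b,
                         size_t nlimbs) {
    const __m256i bias = _mm256_set1_epi64x(INT64_MIN);  /* for unsigned compare */
    uint64_t carry = 0;
    for (size_t i = 0; i < nlimbs; i += 4) {
        __m256i va = _mm256_loadu_si256((const __m256i *)(a + i));
        __m256i vb = _mm256_loadu_si256((const __m256i *)(b + i));
        __m256i vs = _mm256_add_epi64(va, vb);           /* VPADDQ */
        /* generate: the lane overflowed, i.e. (unsigned)sum < (unsigned)a */
        __m256i g = _mm256_cmpgt_epi64(_mm256_xor_si256(va, bias),
                                       _mm256_xor_si256(vs, bias));
        /* propagate: the lane is all ones, so an incoming carry ripples through */
        __m256i p = _mm256_cmpeq_epi64(vs, _mm256_set1_epi64x(-1));
        uint64_t s[4], gm[4], pm[4];
        _mm256_storeu_si256((__m256i *)s, vs);
        _mm256_storeu_si256((__m256i *)gm, g);
        _mm256_storeu_si256((__m256i *)pm, p);
        for (int j = 0; j < 4; j++) {                    /* resolve carries per limb */
            r[i + j] = s[j] + carry;
            carry = (gm[j] & 1) | ((pm[j] & 1) & carry);
        }
    }
    return carry;
}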
