The original proposal, 2 years ago

Discussion of the ForwardCom instruction set and corresponding hardware and software

Moderator: agner


Do we need instructions with two outputs?

Post by agner »

Author: Agner Date: 2016-03-31 01:55
An alternative to two register outputs for add-with-carry is to have extra carry bits in the register. We could, for example, have one extra flag bit for each 32 bits of vector registers. The extra bit can be used for carry, overflow, masks, etc. In this way we can implement add-with-carry with just two inputs and one output. Nobody needs add-with-carry for 8-bit and 16-bit integers. Maybe it is worth the cost of having 3% more bits of vector storage if we can avoid instructions with two outputs? But now we have a problem with saving the vector to memory. We will need two different save instructions - with and without the extra flag bits. This will cause complications in the compiler. And, do we need an extra bit on integer registers as well for the sake of orthogonality?
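
To make the idea concrete, here is a minimal C++ sketch of the intended semantics: each element carries its own extra flag bit, so add-with-carry takes two inputs and produces one output. The struct layout and the 64-bit element size are illustrative assumptions, not a proposed encoding.

Code: Select all

#include <cstdint>

// One vector element with its extra flag bit (illustrative layout only).
struct Elem {
    uint64_t value;
    bool     carry;   // the per-element extra flag bit
};

// r = a + b + a.carry; the carry out goes into the result's own flag bit,
// so the operation has two inputs and one output.
Elem add_with_carry(Elem a, Elem b) {
    Elem r;
    r.value = a.value + b.value + (a.carry ? 1u : 0u);
    r.carry = r.value < a.value || (a.carry && r.value == a.value);
    return r;
}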

We can live without pop instructions. If we can handle the problems of an extra flag bit, then the only need for instructions with two outputs is read-and-increment-pointer. Without this, how do we save all registers in a task switch in an efficient and compact way? The extra bits will make it still more complicated to save registers for task switch. Any suggestions?

BTW, if we can accept extra bits in a vector register, we can also have extra bits for vector length. This will remove the input dependency on a vector-length register.


Author: Hubert Lamontagne Date: 2016-04-01 02:46
I've thought about the "assigning a flag register for each register" thing a bit, and its problem is not only that saving/restoring by the OS on interrupts, but also that it makes all callee-saved registers in the calling convention double-expensive - because the leaf function not only has to save the register value, but also its matching flags. You get a conflict between the principle that "flags aren't saved on function calls" (because they get wiped too easily and you pretty much never want to preserve them anyways) and the principle that "flags should have the same calling convention as their matching registers" (since they presumably come from the same register rename and update together). It also does all sorts of weird things like change 'nops' into different operations (mov r0, r0; add r0, #0; and r0, #-1; and the st r0, [sp+0] + ld r0, [sp+0] sequence all become different due to flag effects). Mov preserves flags (so that you can make 'mov' happen 'for free' by doing it at the register rename stage) but mov immediate doesn't. This is bad for late compiler optimization passes (because it has to take flags into account). So I think having flags be a part of value registers creates more problems than it solves.

If Bignums are really important and we need fast ADC for stuff like RSA encryption, I suggest we make ADC/SBC vector-unit only, and have specific flags registers attached to the vector unit (to reduce the number of dependency paths from the SIMD unit to the main unit). Also, I'd separate the flags that are a result of SIMD operations (i.e. carry) from the flags that control SIMD operations (zeroing, denormal-zeroing, etc.), so that SIMD operations that update flags can simply wipe the previous value - partial flag register updates are bad since they require a separate flag rename engine for every part that doesn't update together!

The exception to this is vector length (for variable vector length), which has to be on the integer-scalar unit because it has to be known by load/store instructions.

-

Ideally, it's best if SIMD instructions cannot cause interrupts and can't affect program flow. For most architectures, SIMD operations execute on a different unit with a different issue queue, so it's best if non-SIMD and SIMD operations can be separated as quickly and easily as possible - basically right after determining instruction size, since operations go to different queues and compete for different register ports and so forth. In theory, you could even design a cpu with a separate IP and instruction cache for the SIMD unit, and do all SIMD loads/stores through a queue (the PS3's Cell is sorta like this, in a way).

For instance, on the ARM Cortex A8, NEON/FPU instructions literally CANNOT cause interrupts since the non-SIMD instruction results don't even commit at the same time, so SIMD instructions and every subsequent instruction have to be 100% sure to run (the results of subsequent non-SIMD instructions have already been committed, so they cannot be undone). The benefit of this is that the non-SIMD commit unit doesn't even have to know that the SIMD unit exists, except for receiving store values in a queue, and the SIMD unit likewise knows nothing about the non-SIMD unit except that instructions and values loaded from memory and forwarded from GPRs arrive in a queue and that store values go in a queue.

X86 enforces this less strongly (so that committing has to be synchronized between the general-purpose unit and the SIMD unit) but even then, there's a reason why, on the Athlon, COMISS (SSE float compare and update main unit flags register) runs on the Vector Path (microcode!) - and it's not because AMD engineers thought people wouldn't use COMISS.

Basically, I don't think orthogonality between non-SIMD instructions and SIMD instructions is a good idea, since they have different goals: non-SIMD instructions have to have as few weird side effects as possible and retire as fast as possible, so that they can be renamed and reordered and jumbled and rolled-back if prediction has failed (or a load/store caused a page fault). SIMD instructions just have to do as much math as possible per cycle - they're all about throughput, so it doesn't matter if they take 4 or 5 cycles to complete, if they can't be reordered and so forth - which is why VLIW is popular in cpus designed for DSP (they don't have to run C++!).

SIMD-oriented code also tends to be more likely to be simple short loops so I don't think it really has to be particularly compact. Also the memory bandwidth usage for instructions will probably be totally dwarfed by the memory bandwidth usage for data in SIMD code anyways.

-

I don't think saving/restoring the register file on task switch using 32 consecutive loads/stores (+sp update) is THAT much of a problem because task switches cause other much slower side effects - for instance, you're likely to get a whole bunch of instruction cache misses and data cache misses and TLB evictions and TLB misses and branch prediction misses and the cache prefetcher getting confused - those are many times more costly.

To handle interrupts, you do need a few scratchpad registers that are only accessible to the OS, for saving the previous values of SP + IP + OS/user mode + a couple extra registers. This is to get work space to "bootstrap" the interrupt handler state save/restore. Early MIPS had the problem that it didn't really have those system-reserved special registers, so you unfortunately lost a couple of general purpose registers instead.

You also probably need the hardware TLB to have different memory mappings for OS and user and switch automatically between those. Another way to deal with this is having a few banked registers (typically SP and the last few registers - just enough to initiate state saving). Even though this makes interrupt handler prologues kinda ugly, it also removes the need for microcoded interrupt handlers (which are often somewhat bypassed by the OS anyways).


Author: Agner Date: 2016-04-01 03:49
Thank you for your comments. It is nice to have sparring partners to discuss with. We are really getting somewhere.

I think there are many advantages to storing the vector length in the vector register itself. This makes it much cheaper to save callee-save registers: You only have to save the part of the register that is actually used. I plan to make special instructions for save/restore. These instructions will save or restore the length and as much of the data as is actually used. The save instruction can increment a pointer by the actual amount of space used. The restore instruction will have to be followed by an extra instruction to increment or decrement a pointer by the amount of space used. The same instruction can be used to adjust the stack pointer before saving. This will save a lot of data cache. If a very long vector register actually contains only a floating point scalar then we need not save any more than that. If a vector register is unused, which will probably happen more than 50% of the time, then we only need to save the length, which is zero. Also, we don't need a complicated lazy save mechanism for task switches. And we get one less input dependency because we don't need a separate register for vector length.
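
A rough C++ sketch of what such a save instruction would do, assuming a simple image layout of a length field followed by only the bytes in use (the real format would be up to the hardware):

Code: Select all

#include <cstdint>
#include <cstring>

// Store the used length, then only the used part of the register, and return
// the pointer advanced by the actual space consumed (hypothetical layout).
uint8_t* save_vector(uint8_t* p, const uint8_t* reg_data, uint32_t used_len) {
    std::memcpy(p, &used_len, sizeof used_len);
    std::memcpy(p + sizeof used_len, reg_data, used_len);
    return p + sizeof used_len + used_len;   // an unused register costs only the length field
}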

The disadvantage is, of course, that the compiler needs to distinguish between saving the data of a vector and saving everything for a later complete restore. Once we have this distinction, however, there is little extra cost to also saving a few carry flags. You want to keep the carry bits separate from the control flags. I agree. My idea is to use the carry bits for overflow detection (so that we can avoid interrupts in vector instructions), and use a flags register, as already present in my initial proposal, for masking and for a lot of option bits that we don't have room for in the instruction code. A few of these option bits will determine the use of the carry flag, i.e. whether it is unused or it detects signed or unsigned integer overflow, floating point overflow or invalid operations or both, and whether it is propagated from input vector operands to output operands. All of these features are only activated if the software actually has a try/catch clause to detect overflows. Otherwise, the compiler doesn't need to care. But you want to avoid instructions with two outputs, and this is the best solution to that problem I can think of :-) We can separate the carry/overflow bits from the control flags without generating any extra output dependence.

We can have an instruction to extract the carry bits to an integer bitfield. This will make it possible to do a fast carry-lookahead for a BIGNUM addition in a large vector register with just a few integer instructions.

I agree that integer registers should not have these extra bits.

The idea of having separate registers for system code only, sounds good. Would eight 64-bit registers be a suitable number?

I have fiddled a little with the instruction format. The coding gets simpler when we get rid of the vector length register and a few option bits. I think this makes it possible to make the instructions shorter so that we can avoid triple-size instructions. Of course, we will have to use two instructions for loading a 64-bit immediate value then, but I think we can live with that.


Author: Joe Duarte Date: 2016-04-02 02:09
Hubert and Agner, I have a related but slightly tangential question, since we're talking about TLBs.

In our applications, within a virtual memory space, why do we still use conventional -- and enormous -- memory addresses?

We use 64-bit (or 48 actual I think on x64). Since it's a program's exclusive virtual memory space, a universe of our own, why can't we use arbitrary and much, much smaller addresses? Like 1, 2, 3, etc., basically a byte or two. If the TLB is going to have to translate anyway, is it a problem to translate small addresses into conventional physical ones? Processes could have unique IDs - one byte would be ample most of the time - so the TLB would have a unique identifier for all addresses (process ID + the process' internal small memory addresses), and it would often take only two or three bytes total.

You could do a variety of things with pages and their sizes in that scenario. What am I missing here? I usually stick to managed languages.

Separately, I think it would be fruitful to think about how an ISA could be designed to help JITs, and common parsing workloads like the web. If we designed an ISA to increase the performance of browser JS engines, .NET and Java VMs, etc., what would that look like? Would it be instruction selection that mattered most, or other aspects of the architecture? Intel rolled out some great instructions for string parsing in SSE 4.2, but my impression is that hardly any developers know about them or use them (I couldn't find them in nginx or Apache source, or browsers I checked, and they could really benefit.) That raises a separate issue of getting developers to pay attention to ISAs and optimization wins...


Author: Agner Date: 2016-04-02 03:03
Joe Duarte wrote:
why do we still use conventional -- and enormous -- memory addresses?

We use 64-bit (or 48 actual I think on x64). Since it's a program's exclusive virtual memory space, a universe of our own, why can't we use arbitrary and much, much smaller addresses? Like 1, 2, 3, etc., basically a byte or two.
Most operating systems have now switched to 64-bit addresses. It is true that most applications can do with a private 32-bit address space, but not all. A video editing program, for example, may need more than 4 gigabytes of data, and future needs may be still more. Better have 64-bit addresses to fit future needs than using complicated memory bank swapping and the like.
Separately, I think it would be fruitful to think about how an ISA could be designed to help JITs, and common parsing workloads like the web. If we designed an ISA to increase the performance of browser JS engines, .NET and Java VMs, etc., what would that look like? Would it be instruction selection that mattered most, or other aspects of the architecture?
I don't know if a JIT compiler needs anything special. Maybe string compare, but we can do that with vectors of 8-bit elements (this will work with UTF-8 strings). Anything else you have in mind for JIT compilers?

My proposal includes variable-length vector registers that enable the software to adapt automatically to the different vector sizes of different processors without recompiling. If one compiled executable file fits all variants of the processor, why do we need JIT compilers at all?

Interpreters for byte code languages have a lot of jump tables or tables of function pointers. My proposal has an instruction for efficient handling of jump/call tables of 32-bit pointers relative to an arbitrary reference point.
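
For illustration, here is a hedged C++ sketch of the kind of table such an instruction would walk: 32-bit offsets from an arbitrary reference point instead of full 64-bit pointers. The handler names are made up, and taking differences of function addresses is outside standard C++ (it works on mainstream platforms when everything sits within ±2 GB), but it shows the memory layout and the load + add + indirect jump that the proposed instruction would fold into one operation.

Code: Select all

#include <cstdint>
#include <cstdio>

static void op_add()  { std::puts("add");  }   // hypothetical byte-code handlers
static void op_load() { std::puts("load"); }

using Handler = void (*)();

int main() {
    // Arbitrary reference point; table entries are 32-bit distances from it.
    const intptr_t base = reinterpret_cast<intptr_t>(&op_add);
    const int32_t table[] = {
        0,
        int32_t(reinterpret_cast<intptr_t>(&op_load) - base),
    };
    const int opcodes[] = {1, 0};
    for (int opcode : opcodes) {
        // A plain CPU does a 32-bit load, an add, and an indirect call here;
        // the proposed instruction would combine these steps.
        Handler h = reinterpret_cast<Handler>(base + table[opcode]);
        h();
    }
}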
Intel rolled out some great instructions for string parsing in SSE 4.2, but my impression is that hardly any developers know about them or use them
The SSE4.2 instructions are ingenious, but very complicated for programmers to use. Most text strings intended for human reading are not so long that the speed of text processing really matters. The SSE4.2 instructions may be useful for other purposes, e.g. DNA sequence analysis.


Author: Joe Duarte Date: 2016-04-02 17:09
Agner wrote:
Most operating systems have now switched to 64-bit addresses. It is true that most applications can do with a private 32-bit address space, but not all. A video editing program, for example, may need more than 4 gigabytes of data, and future needs may be still more. Better have 64-bit addresses to fit future needs than using complicated memory bank swapping and the like.
Am I correct in recalling that x86-64 doesn't actually expose a 64-bit address space, but rather a 48-bit one? See http://stackoverflow.com/questions/6716 ... ress-space

However, this doesn't matter for my purposes. I'm asking why, in virtual memory, we mirror the physical memory addressing scheme. Why does a process use an address like 0x7fffd1510060 when it could use an address like 1 or 2 or D4? It's the process' exclusive virtual memory space – wouldn't it save a lot of memory if it could use one-byte pointers? I imagine that the TLB or MMU can translate these virtual addresses just as easily as it translates 0x7fffd1510060.

I've also wondered if we could use time stamps as virtual memory addresses – start the clock at zero nanoseconds for each process, every allocation marked by the nanoseconds transpired since the start of the process, each process having its own little Unix epoch if you will. This would also be more compact than the status quo, given some simple truncation and compression techniques. Time-stamped allocations might also be useful for a capabilities-based system, like the CHERI ISA at Cambridge or the Barrelfish OS that Microsoft and ETH Zurich have worked on.

Agner wrote:
I don't know if a JIT compiler needs anything special. Maybe string compare, but we can do that with vectors of 8-bit elements (this will work with UTF-8 strings). Anything else you have in mind for JIT compilers?
Parsing, parsing, and more parsing (and lexing). I'm not sure that processor and ISA designers have thoroughly explored how parsing performance might be improved. And of course the actual compilation to machine code. JITs have to make hard trade-offs with respect to generating maximally optimized code vs. generating code quickly. They have to forsake some of the optimizations that a static C/C++ compiler would provide. (Relatedly, you might find this interesting. Apple recently added LLVM as the last-stage compiler for their WebKit/Safari JavaScript JIT, but more recently replaced it with a completely new compiler. Very interesting deep dive here: https://webkit.org/blog/5852/introducin ... -compiler/)

The other big thing JITs and many non-JIT runtimes have to do is garbage collection. I think it's well worth thinking about how an ISA could be designed to optimize garbage collection. There are some papers out there on hardware-accelerated garbage collection, but I haven't seen anyone model how an ISA's design decisions could help (or hurt) garbage collection.

Agner wrote:
My proposal includes variable-length vector registers that enable the software to adapt automatically to the different vector sizes of different processors without recompiling. If one compiled executable file fits all variants of the processor, why do we need JIT compilers at all?
We need them for the web, for JavaScript. This will be the case for many years to come. And for REPL and interpreter-like execution environments. Julia comes to mind the most.

WebAssembly is rolling out later this year. It looks excellent and thankfully everyone is behind it: Microsoft, Mozilla, Google, and perhaps Apple. It will be a partly compiled bytecode that browsers can execute much, much faster than JavaScript. However it won't replace JavaScript, as it's meant more for games, multimedia, and compute-intensive workloads. For now, the only source languages supported are C and C++, but more are expected. https://github.com/WebAssembly/design

Now that I think about it, you really ought to be involved in that project. They would benefit from your input. Relatedly, the web standards authorities and browser makers have been working on SIMD.JS, which I think would also benefit from your insights. I'm surprised they haven't asked for your help (if in fact they haven't). https://developer.mozilla.org/en-US/doc ... jects/SIMD

Agner wrote:
The SSE4.2 instructions are ingenious, but very complicated for programmers to use. Most text strings intended for human reading are not so long that the speed of text processing really matters. The SSE4.2 instructions may be useful for other purposes, e.g. DNA sequence analysis.
I don't think text length is the issue. The new instructions are mostly designed to parse XML (and would work just as well for parsing any kind of structured text, HTML, even bytecode depending on some particulars.) From one of the Intel papers:
XML documents are made up of storage units called entities, like Character Data, Element, Comment, CDATA Section, etc. Each type of entity has its own well-formed definition that is a series of character range rules.[1] The main work of Intel XML parsing is to recognize these entities and their logic structures.

From Intel XML Parsing Accelerator, we found that character checking loop occupies more than 60% CPU cycles of the whole parsing process, depending on the property of benchmark. There are two kinds of important behavior in this loop, read bytes and check whether it is legal for its corresponding entity type. Without any parallel instructions for string comparison, this process must be implemented in serializing mode.
(From https://software.intel.com/en-us/articl ... intel-sse4)

I think the instructions would be useful for superfast user agent detection in web servers. I think PCMPESTRI and the other instructions work with 16-byte strings, and you could probably take a 16-byte chunk of a certain area of the user agent string that would uniquely identify the key factors you cared about across all user agents, like mobile or not, specific browser and version (which could, for example, tell you if you could use the vectors in SIMD.JS because you'd know which browsers support it.) The web is too slow, and I think common web servers, applications, and databases would be much faster if they used modern CPU instructions in their code.

(Example user agent string: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.110 Safari/537.36)
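
As a rough illustration of that kind of check (not taken from any real web server), here is a hedged C++ sketch using the SSE4.2 intrinsic for PCMPISTRI in equal-ordered (substring) mode on one 16-byte chunk of a user-agent string. It assumes compilation with SSE4.2 enabled and a token of at most 16 bytes, and a match that starts near the end of the chunk would still have to be confirmed against the next chunk.

Code: Select all

#include <nmmintrin.h>   // SSE4.2 intrinsics (PCMPISTRI)
#include <cstring>

// Does the token appear (possibly as a partial match at the end) in this
// 16-byte chunk? chunk16 must point to 16 readable bytes.
bool chunk_contains(const char* chunk16, const char* token) {
    __m128i hay = _mm_loadu_si128(reinterpret_cast<const __m128i*>(chunk16));
    char buf[16] = {0};                           // token must be at most 16 bytes
    std::memcpy(buf, token, std::strlen(token));
    __m128i ndl = _mm_loadu_si128(reinterpret_cast<const __m128i*>(buf));
    // Equal-ordered mode = substring search; returns 16 if no match in the chunk.
    int idx = _mm_cmpistri(ndl, hay, _SIDD_UBYTE_OPS | _SIDD_CMP_EQUAL_ORDERED);
    return idx < 16;
}

Called over successive 16-byte chunks of the example user-agent string above, this would spot a token like "Chrome/" without a byte-by-byte loop.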

Cheers,

Joe D.


Author: Agner Date: 2016-04-02 04:26
I have found a way to do addition of very large integers without the need for special carry bits:
  • Put the two big numbers into vector registers A and B, or as much of the numbers as will fit into the maximum vector length
  • Calculate the sums of all 64-bit vector elements: C = A + B
  • Make a vector of the carries, one bit in each 64-bit element: D = (C < A)
  • Find elements with all ones: E = (C == -1)
  • Combine bit 0 of each element of C into an integer bitfield: D1 = convert_boolean_vector_to_bitfield(D)
  • Do the same with E: E1 = convert_boolean_vector_to_bitfield(E)
  • Use integer operations to do the carry look-ahead. D1 is the elements that generate carry, E1 is the elements that propagate carry. F = E1 xor (E1 + 2D1 + carry_in)
  • Convert back from bitfield to vector: F1 = convert_bitfield_to_boolean_vector(F)
  • Add the propagated carries: Sum = C + F1
  • Carry out if more rounds needed: Carry_out = F >> number_of_vector_elements
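
A scalar C++ sketch of these steps, with each "vector element" modelled as one uint64_t in an array (least-significant element first, n at most 63 so the bitfield arithmetic stays inside one integer). This is only meant to show that the carry look-ahead works, not how the vector instructions would be encoded.

Code: Select all

#include <cstdint>
#include <cstddef>

// Returns the carry out of adding two n-element big numbers (n <= 63 here).
bool bignum_add(const uint64_t* a, const uint64_t* b, uint64_t* sum,
                size_t n, bool carry_in) {
    uint64_t c[63];          // C = A + B, element by element
    uint64_t gen = 0;        // D1: bitfield of elements that generate a carry
    uint64_t prop = 0;       // E1: bitfield of elements that propagate a carry
    for (size_t i = 0; i < n; ++i) {
        c[i] = a[i] + b[i];
        gen  |= uint64_t(c[i] < a[i]) << i;           // D = (C < A)
        prop |= uint64_t(c[i] == ~uint64_t(0)) << i;  // E = (C == -1)
    }
    // Carry look-ahead with ordinary integer arithmetic:
    // bit i of F is the carry into element i, bit n is the carry out.
    uint64_t f = prop ^ (prop + 2 * gen + (carry_in ? 1 : 0));
    for (size_t i = 0; i < n; ++i)
        sum[i] = c[i] + ((f >> i) & 1);               // Sum = C + F1
    return (f >> n) & 1;                              // Carry_out = F >> n
}
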
If we don't need extra carry bits for high precision arithmetics, then the only need for these extra bits is for detecting and propagating integer overflow.

If we don't have the extra carry flags, then we will need special instructions for checking if an addition, multiplication, etc. overflows. After each addition or other arithmetic instruction, issue another instruction with the same inputs just to check if the operation overflows. The outputs of all the overflow checks for a series of calculations should then be OR'ed together. The total number of instructions will be three times the number of instructions needed without overflow check.
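
In plain C++ (assuming no special flags or instructions), the pattern looks roughly like this, which is where the factor of three comes from:

Code: Select all

#include <cstdint>
#include <limits>

// Would the signed 64-bit addition a + b overflow? (Same inputs as the add.)
inline bool add_overflows(int64_t a, int64_t b) {
    return (b > 0 && a > std::numeric_limits<int64_t>::max() - b) ||
           (b < 0 && a < std::numeric_limits<int64_t>::min() - b);
}

inline int64_t wrap_add(int64_t a, int64_t b) {   // well-defined wraparound add
    return int64_t(uint64_t(a) + uint64_t(b));
}

// (x + y) + z with overflow tracking: add, check, OR - roughly three
// instructions for every useful addition.
int64_t sum3(int64_t x, int64_t y, int64_t z, bool* overflowed) {
    bool ovf = add_overflows(x, y);
    int64_t t = wrap_add(x, y);
    ovf |= add_overflows(t, z);
    int64_t s = wrap_add(t, z);
    *overflowed = ovf;
    return s;
}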

So what are the costs and benefits? Without the extra flag bits we need three times as many instructions if we want to check for integer overflow. We can avoid overflow in many cases by using 64-bit integers, but even 64-bit integers can easily overflow if the calculation has many multiplications. Compiler support for checking integer overflow is rare, unfortunately, so any mechanism for detecting integer overflow will perhaps not be used much. On the other hand, if an efficient method was available, we would probably see compilers that support it and programmers that use it. In fact, many programmers are frustrated over how difficult it is to detect signed integer overflow. Traps for integer overflow exceptions are not good for vector code. One solution is to use floating point calculations instead of integer, but I don't see that solution used much. Floating point calculations have longer latencies, of course. No solution seems to be really good here.

Do we need instructions with two outputs?

Post by agner »

Author: Hubert Lamontagne Date: 2016-04-02 14:13
Agner wrote:
Thank you for your comments. It is nice to have sparring partners to discuss with. We are really getting somewhere.
Ha yeah. "Sparring partner" is a great way to put it. :3
I think there are many advantages to storing the vector length in the vector register itself. This makes it much cheaper to save callee-save registers: You only have to save the part of the register that is actually used. I plan to make special instructions for save/restore. These instructions will save or restore the length and as much of the data as is actually used. The save instruction can increment a pointer by the actual amount of space used. The restore instruction will have to be followed by an extra instruction to increment or decrement a pointer by the amount of space used. The same instruction can be used to adjust the stack pointer before saving. This will save a lot of data cache. If a very long vector register actually contains only a floating point scalar then we need not save any more than that. If the vector register is unused, which will probably happen more than 50% of the time, then we only need to save the length, which is zero. Also, we don't need a complicated lazy save mechanism for task switches. And we get one less input dependence because we don't need a separate register for vector length.
One catch is that vector registers need special alignment. For instance,
if your vector regs are 512bits and your DCache width is 512bits, you
want your vectors to be 512bit aligned when saved to the stack, so you
need to 512-align your stack pointer. Also, as the vector register size
increases, the stack size allocated to applications has to grow because
an app that was tested and known to work with 128bit vector saves/restores
might cause stack overflows if vectors become 512bits!

In light of this, I'd suggest:
- Scalar floats should get their own 32/64bit registers. In some C++
programs (games, audio, etc) they see lots of use and are often passed
to/from functions, so they need to be easy to save/restore. Since they have
the same 4/8byte alignment as 32/64bit ints, this is very easy to do if
you mandate an 8byte aligned SP in the calling convention.

- In fact, I'd even suggest a second, different stack for vector register
saves/restore. This way, the CPU can keep the VSP (vector stack pointer)
in perfect alignment all the time without having to do dynamic realignment
on the regular SP, and it can have special instructions that adjust the VSP
the right way for each vector, and the OS can grow the vector stack to make
sure that a program that expects 128-bit vector alignment will never generate
stack overflows when alignment grows to 256-bit+ etc...

I see the whole sequence for saving multiple vectors going something like this:

Code: Select all

SUB sp, 12
; special instruction that gets the number of empty bytes to adjust vsp so that
; the next vector save is aligned and can be done in a single DCache cycle
GETVECTORSTOREPREPAD r0, vsp, v0.size
ST r0, [sp + 0]
ADD vsp, r0
ST v0, [vsp]
GETVECTORSTOREPREPAD r1, vsp, v1.size
ST r1, [sp + 4]
ADD vsp, r1
ST v1, [vsp]
GETVECTORSTOREPREPAD r2, vsp, v2.size
ST r2, [sp + 8]
ADD vsp, r2
ST v2, [vsp]
The whole sequence can probably be streamlined somewhat but I
hope you get what I mean here.

Agner wrote:
The disadvantage is, of course, that the compiler needs to distinguish between saving the data of a vector and saving everything for a later complete restore. Once we have this distinction, however, there is little extra cost to also saving a few carry flags.
There's an extra cost, it's just in a different non-obvious place:
it forces the compiler to figure out if the carry bits are relevant
for each operation in a chain, and if the compiler can't figure it
out it will output less efficient code. Whereas if carry flags are in their
own register, and only written/read by some operations (on ARM,
when SUB generates flags it is called SUBS and is a different
instruction), then the compiler only ever has to worry about carry
flags for instructions that expressly read/write them (ADDS / ADC /
ADCS / SUBS / SBC / SBCS / CMPS etc), and then it just
becomes one extra register pool in the compiler's register allocator.

The use of separate flag registers that only get written to by a limited
subset of operations is also good for CPUs, because then it needs
a much, much less aggressive unit for handling flag register writes/reads
since it only needs to handle 1 write/read per cycle, instead of like
3/4+ like on x86.
The idea of having separate registers for system code only, sounds good. Would eight 64-bit registers be a suitable number?
8 registers for general purpose OS usage sounds good. It can probably
be all combined in an instruction for reading/writing CPU control registers,
like the CR0..CR15, DR0..DR15, MXCSR, MSW, GDTR, IDTR, LDTR, TR,
CS, SS, DS, ES, FS, GS, and floating point control registers on x64.

Some system operations could also be implemented as reads/writes to system
registers: CPUID could be replaced by a bunch of read-only system registers
that give the CPU model for instance.

Having lots of potential system registers available would probably help with
limiting the need to add new system opcodes as new features are added.

Joe Duarte wrote:
Hubert and Agner, I have a related but slightly tangential question, since we're talking about TLBs.

In our applications, within a virtual memory space, why do we still use conventional -- and enormous -- memory addresses?

We use 64-bit (or 48 actual I think on x64). Since it's a program's exclusive virtual memory space, a universe of our own, why can't we use arbitrary and much, much smaller addresses? Like 1, 2, 3, etc., basically a byte or two. If the TLB is going to have to translate anyway, is it a problem to translate small addresses into conventional physical ones? Processes could have unique IDs - one byte would be ample most of the time - so the TLB would have a unique identifier for all addresses (process ID + the process' internal small memory addresses), and it would often take only two or three bytes total.

You could do a variety of things with pages and their sizes in that scenario. What am I missing here? I usually stick to managed languages.
ARM and x86 handle this by having a 32-bit mode. :3

You could definitively argue for going for a 32-bit
architecture with a standard 64-bit extension like what
Risc-V does. This makes total sense for an architecture
that's going to be used in embedded devices.

Even MIPS falls in this case: ultimately, it was used in way
more PS2's, set-top boxes and so forth than in servers.
Because of this, its 32bit support was a good thing
(and you can even argue that the infamous branch-delay-slot
was actually GOOD for MIPS).

----

Anyhow, the issue here I guess is that if your app has pointers
smaller than 4-bytes, well:

- 3-bytes can't be aligned to cache lines. If you have an array
of pointers, eventually one of the pointers has to straddle
2 cache lines and that's a much larger penalty than some
wasted DCache due to using 4-byte pointers.

- 3-bytes is only a 16 MB address space, and few languages
want to be limited to something that small. You cannot start
with 3-byte pointer program and then dynamically upgrade
everything to 4-bytes if you ever run out of space. Might as
well make everything 4-byte from the start.

- 2-bytes is only 64 KB. Sure, you could have 2-byte pointers,
then have all sorts of data zone selectors to grow out of that:
this is exactly how 16-bit x86 works, and programs just totally
outgrew this. Even the late generation of DOS games are way
too large for this and do everything in 32bits.

Anyhow, this is also why the page table is 2 levels on 32bit x86: a lot
of apps are only going to use a few hundred kilobytes of ram,
so it makes sense to only have page tables for small sections
of ram. On the top level page table, all the unused 4mb
blocks-of-pages are simply marked as invalid, and extra page
tables are allocated as the program's memory usage grows.
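
For reference, the split on 32-bit x86 with 4 KB pages looks like this (a plain C++ illustration of the scheme just described):

Code: Select all

#include <cstdint>

// 10 bits select the page-directory entry, 10 bits the page-table entry,
// 12 bits the offset in the 4 KB page. A directory entry marked invalid
// covers a whole 4 MB region, so a small program needs very few page tables.
struct VAddrSplit { uint32_t dir, table, offset; };

inline VAddrSplit split_vaddr(uint32_t va) {
    return { va >> 22, (va >> 12) & 0x3FF, va & 0xFFF };
}
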
Separately, I think it would be fruitful to think about how an ISA could be designed to help JITs, and common parsing workloads like the web. If we designed an ISA to increase the performance of browser JS engines, .NET and Java VMs, etc., what would that look like? Would it be instruction selection that mattered most, or other aspects of the architecture? Intel rolled out some great instructions for string parsing in SSE 4.2, but my impression is that hardly any developers know about them or use them (I couldn't find them in nginx or Apache source, or browsers I checked, and they could really benefit.) That raises a separate issue of getting developers to pay attention to ISAs and optimization wins...
Unfortunately, this kind of higher-level stuff typically has all
sorts of special cases and exceptions that make them
impossible to implement in fast circuits.

For instance, JS interpreters often have the trick of having
numbers as 64bit floats, and other types as an invalid
64bit float with a pointer in it. Well, now instead of just
adding two floats together like in C++, you first have to decide
whether one of your floats is actually a string and you have
to trigger string to double conversion.
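
A hedged C++ sketch of that trick (the bit layout here is a made-up example, not any particular engine's format): the interpreter has to check tags and possibly take a slow path before it can do the one floating-point add that C++ would have compiled directly.

Code: Select all

#include <cstdint>
#include <cstring>

constexpr uint64_t kBoxedTag = 0xFFF8000000000000ull;  // assumed tag range

inline bool is_number(uint64_t v) { return v < kBoxedTag; }

inline double to_double(uint64_t v)   { double d; std::memcpy(&d, &v, sizeof d); return d; }
inline uint64_t from_double(double d) { uint64_t v; std::memcpy(&v, &d, sizeof v); return v; }

// Stand-in for string-to-double conversion, overloaded valueOf(), etc.
uint64_t slow_path_add(uint64_t, uint64_t) { return kBoxedTag; }

uint64_t js_add(uint64_t a, uint64_t b) {
    if (is_number(a) && is_number(b))                    // tag checks first...
        return from_double(to_double(a) + to_double(b)); // ...then the actual add
    return slow_path_add(a, b);
}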

Or worse even, in some languages it can be some type with
an overloaded conversion to double, and the conversion code
can call all sorts of functions generating all sorts of side effects
(like printing to the console), which means that the compiler
can't even change the order of two additions to make things
faster because it can introduce new bugs.

String parsing also falls in this kind of case. For instance,
checking the length of a string is not so obvious: often the
interpreter knows the number of bytes of a string, but some of
those bytes are UTF-8 characters, and if you account for special
cases (like ending up with Latin-1 aka CP-1252 text), then there's
often really no alternative to just looping through the string byte
per byte, in which case it's impossible to be faster than x86 / ARM /
MIPS even though those chips have no special support for this.

This is also why so few higher level languages support multiple
threads / multi-processor : there's just so many weird states that
the interpreter can be in, and there are just so many weird possible
side-effects, that you can't let a second core go through that data -
it would just create all sorts of nasty race conditions and so forth.

-----

Java and .NET are in a lesser class of badness, in that a Java
float is just a float and you can add it normally, and you can easily
run Java multi-threaded (and it has a nice way off adding mutexes
to objects). But then again, Java benefits way less from special
architectures - if your Java program can't run fast on x86, then it
can't run fast on anything!

The real cost of Java is the Garbage Collector, which makes it
impossible to avoid garbage-collection pauses, which are often
longer than 100ms, or even occasionally longer than 300ms.
This is perfectly fine for server software (and Java is an enterprise
language so it fits this perfectly), but it makes Java unsuitable for
software that can't really have pauses like games and heavy-duty
user-facing GUI software (Photoshop, C++ IDEs, Word, iPhone
apps etc). This cannot be solved by processor design.


Agner wrote:
If we don't need extra carry bits for high precision arithmetics, then the only need for these extra bits is for detecting and propagating integer overflow.

If we don't have the extra carry flags, then we will need special instructions for checking if an addition, multiplication, etc. overflows. After each addition or other arithmetic instruction, issue another instruction with the same inputs just to check if the operation overflows. The outputs of all the overflow checks for a series of calculations should then be OR'ed together. The total number of instructions will be three times the number of instructions needed without overflow check.

So what are the costs and benefits? Without the extra flag bits we need three times as many instructions if we want to check for integer overflow. We can avoid overflow in many cases by using 64-bit integers, but even 64-bit integers can easily overflow if the calculation has many multiplications. Compiler support for checking integer overflow is rare, unfortunately, so any mechanism for detecting integer overflow will perhaps not be used much. On the other hand, if an efficient method was available, we would probably see compilers that support it and programmers that use it. In fact, many programmers are frustrated over how difficult it is to detect signed integer overflow. Traps for integer overflow exceptions are not good for vector code. One solution is to use floating point calculations instead of integer, but I don't see that solution used much. Floating point calculations have longer latencies, of course. No solution seems to be really good here.
Dunno, I write C++ sound code, and for code with lots
of multiplications that could overflow, I use floating point
code _all the time_. :3

Floating point does have higher latency, but that's true of
any code that uses lots of multiplication. Float multiplication
also has the really nice property that you need a lot less
shifts and clamps than fixed point code, so for algos with tons
of multiplications (say, FFT for instance), floating-point
is really really useful.

If I need to do multiplications in integer code (for instance,
in a FM synthesizer - can't use float because converting
float to int to use as array indexes is slow), then what I
generally do is that I just make sure my * operands are
essentially 16-bits (or, say, one operand is 24-bits and the
other is 8-bits), and carefully use lots of >> bitshifts to keep
everything in range. For sound generation, this is generally
fine because the human ear isn't that 'precise' anyways
(in terms of bits).

Another special case was writing ARM NEON code, which had
lackluster floating-point and only a 16x16 multiplier in the hardware
(32x32 multiplies had a significant penalty). So I used a lot of a
particular opcode called VQDMULH.s16 - 16x16->16 vector
signed doubling multiply keeping the top part of the result and
clamping the special case of -32768* -32768 to 32767 - equivalent to this:

Code: Select all

int res = (int16_t)a * (int16_t)b;  // 16x16 -> 32-bit signed multiply
res = res >> 15;                    // doubling multiply, keep the high part
if (res == 32768)
    res = 32767;                    // clamp the particular case of -32768 x -32768
out = (int16_t)res;
That being said, I have had to use 32x32->64 wide multiplication
in code 2 or 3 times. But I think it's a special case, compared to
the 32bit float multiplication, which I use all over the place!


Author: Agner Date: 2016-04-03 13:49
Hubert Lamontagne wrote
One catch is that vector registers need special alignment. For instance, if your vector regs are 512bits and your DCache width is 512bits, you want your vectors to be 512bit aligned when saved to the stack, so you need to 512-align your stack pointer.
The code has to be compatible with different processors with different vector sizes, so we don't want the required alignment to depend on the processor. A separate stack for vectors is a possible solution, but very wasteful. Cache space is a limiting resource so I don't want to save a possibly very long vector when only a small part of it is used. This is why I think it is smart to save the vector length in the vector register itself. A save instruction will save only as much as is needed. In the caller-save situation, the function knows how much of the register is saved, so it can use a normal store instruction. In the callee-save situation, the function will rely on the register length information and use the special save instruction to save only what is needed. This situation will be rare anyway, because the 16 caller-save registers will be enough for most purposes.

Separate registers for floating point scalars would be useful if we had to save the full vector in a callee-save situation, but the length information eliminates this need.

I think we can live with 8-byte alignment of vectors. The hardware can handle this with a simple barrel shifter, but it may have to load an extra cache line, of course. Large arrays should be aligned by the cache line size for optimum performance.

I think the format for saving vector registers with the special save instruction should be implementation dependent. It may use a single byte for the length if the length cannot exceed 128 bytes, or it may use more for longer vectors or for the sake of alignment. It may even compress the data if this can be done fast enough. For example, a boolean mask vector using only one bit of each 64-bit element can obviously be compressed a lot. The format will be padded to fit whatever alignment is optimal on the particular processor. The software should use data in the special "save format" for no other purpose than to restore the register on the same processor.

It is a disadvantage that the saved format may be longer than the maximum vector length when it includes the length information. But I think this is outweighed by the advantage that most saved registers will use less space. Many registers will be unused and store only a zero for the length.
There's an extra cost, it's just in a different non-obvious place: it forces the compiler to figure out if the carry bits are relevant for each operation in a chain, and if the compiler can't figure it out it will output less efficient code. Whereas if carry flags are in their own register, and only written/read by some operations (on ARM, when SUB generates flags it is called SUBS and is a different instruction), then the compiler only ever has to worry about carry flags for instructions that expressly read/write them (ADDS / ADC / ADCS / SUBS / SBC / SBCS / CMPS etc), and then it just becomes one extra register pool in the compiler's register allocator.
As I wrote, I have found an alternative solution for add with carry. We only have to consider whether we need an efficient way of tracking integer overflow.
CPUID could be replaced by a bunch of read-only system registers that give the CPU model for instance.
Good idea!

Joe Duarte wrote:
Since it's a program's exclusive virtual memory space, a universe of our own, why can't we use arbitrary and much, much smaller addresses?
My priority is the performance of big systems. That's why I have 64-bit address space. All addresses are relative to some pointer (instruction pointer, data section pointer, stack pointer, or an arbitrary pointer) with a signed offset of 8 bits or 32 bits. The instruction size will not be reduced by having a smaller address space, but of course we could save some stack space by having a 32-bit mode. I don't like having two different modes, though. Then we would have problems with stack alignment for doubles, etc. Byte code languages can have their own smaller address space of course.
String parsing also falls in this kind of case. For instance, checking the length of a string is not so obvious: often the interpreter knows the number of bytes of a string, but some of those bytes are UTF-8 characters, and if you account for special cases (like ending up with Latin-1 aka CP-1252 text), then there's often really no alternative to just looping through the string byte per byte
I think we should use UTF-8 only. It is possible to search for a terminating zero by loading a full vector and compare all bytes in the vector with zero. My ABI requires a little extra space at the end of user memory to avoid access violation when reading past the end of a string that happens to be placed at the very end of user memory. But of course it is more efficient to save the length of the string.
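
A rough SSE2 sketch of that zero search (SSE2 standing in for the proposed variable-length vectors). It assumes the string starts 16-byte aligned and, as the ABI note above implies, that reading a full vector past the terminator stays inside readable memory; __builtin_ctz is a GCC/Clang builtin.

Code: Select all

#include <emmintrin.h>   // SSE2
#include <cstddef>

// Length of a zero-terminated string, 16 bytes per step (s must be 16-aligned).
size_t vec_strlen(const char* s) {
    for (size_t i = 0; ; i += 16) {
        __m128i v = _mm_load_si128(reinterpret_cast<const __m128i*>(s + i));
        int zeros = _mm_movemask_epi8(_mm_cmpeq_epi8(v, _mm_setzero_si128()));
        if (zeros != 0)
            return i + __builtin_ctz(zeros);   // position of the first zero byte
    }
}
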
I write C++ sound code, and for code with lots of multiplications that could overflow, I use floating point code _all the time_. :3
Hardware multipliers are expensive, and divisors are even more expensive. I wonder if we need to support multiplication and division of all operand sizes, including vectors of 8-bit and 16-bit integers, if programmers are using floating point anyway?


Author: Joe Duarte Date: 2016-04-03 16:51
Joe Duarte wrote:
Since it's a program's exclusive virtual memory space,
a universe of our own, why can't we use arbitrary and
much, much smaller addresses?
Agner replied:
My priority is the performance of big systems. That's
why I have 64-bit address space. All addresses are
relative to some pointer (instruction pointer, data
section pointer, stack pointer, or an arbitrary
pointer) with a signed offset of 8 bits or 32 bits.
The instruction size will not be reduced by having a
smaller address space, but of course we could save
some stack space by having a 32-bit mode. I don't like
having two different modes, though. Then we would have
problems with stack alignment for doubles, etc. Byte
code languages can have their own smaller address
space of course.
I just don't like all the waste with 64-bit types and pointers. It becomes less of an issue over time, but it still bothers me. Relatedly, I like what these Berkeley people did with Precimonious, a tool that tunes floating point precision and eliminates some of the waste: www.eecs.berkeley.edu/~rubio/includes/sc13.pdf

Question: Does an ISA really need to specify the number of architectural registers? What would the implications be of not doing so, and having an infinite number of architectural registers like LLVM's IR? It seems like the number of registers is a fiction anyway (I was stunned to discover that x86-64 processors from Intel and AMD have nearly 200 physical registers.) This would make the number of registers implementation-dependent, rather than part of the ISA specification. See Vikram Adve's recent talk at Microsoft: http://research.microsoft.com/apps/vide ... ?id=249344 (The Microsoft Research people must have been having a bad day or something – their questions and comments reveal that they thoroughly misunderstood his ideas.)

Question 2: Let's assume that we have registers R0 - R31. Might it be useful to also have an unspecified register – call it Rx – that basically tells the CPU "give me whatever register you have – I don't care which one". I can imagine some scenarios where this might be useful for a compiler. And it seems to fit with the reality of register renaming anyway.


Author: Hubert Lamontagne Date: 2016-04-03 19:00
Suppose the SP is at 0x02018, and the L1 cache lines are 64 bytes in size, and you want to save a vector that's, say, 24 bytes long (6*32bit floats). Then, you need to first save the control word that tells you the size, compression etc. of the vector. Fair enough, the vector data goes to 0x01FE8..0x02017. And then you have to save the vector size control word, which puts you at 0x01FE4 if you assume 32 bits... but this doesn't work because then your SP is not 8-byte aligned anymore for 64bit integer and floating-point values. So you must save the vector size word to 0x01FE0 instead, with some extra padding (and the CPU either stores the amount of padding in the vector size word, or recalculates the amount of padding from SP alignment and vector size when reloading the vector).

Another possibility is that you could add some post-padding, so that the vector line is saved to 0x01FD8..0x01FFF and the control word goes to 0x01FD0, so that the whole thing fits in a single cache line. The amount of post-padding must be saved in the vector size control word.

Yeah, it's doable. But it's a long multicycle instruction, probably microcoded - after all, it writes an unpredictable amount of bytes to unpredictable offsets, often spanning 2 different cache lines, and updates the SP, and involves multiple address calculations to figure out just how much pre-padding and post-padding you need to do to keep your stack and your data well aligned. And it's very likely to completely block memory operation reordering (ie act like a memory barrier) because it's too difficult for concurrent memory operations to figure out whether they will overlap or not.

Agner wrote:
Hardware multipliers are expensive, and divisors are even more expensive. I wonder if we need to support multiplication and division of all operand sizes, including vectors of 8-bit and 16-bit integers, if programmers are using floating point anyway?
Generally, 8-bit and 16-bit vector multiplications are provided in SIMD instruction sets to do stuff like movie decoding and software rendering (when OpenGL/DirectX are unavailable due to software constraints, such as running as a plugin). For scalars, 32*32->32 multiplies cover everything (and are common in C++ code), but some CPUs also provide 16*16->32 multiplies because they run faster (ARM).


Author: Agner Date: 2016-04-04 08:50
Joe Duarte wrote:
Does an ISA really need to specify the number of architectural registers? What would the implications be of not doing so, and having an infinite number of architectural registers like LLVM's IR?
What do you mean? If we have 1023 virtual registers like LLVM then we need 10 bits in the opcode for each register. Or do you want a rolling register stack? Then we have a problem when the register stack overflows. That would be quite wasteful if the overflow happens in the innermost loop.
Might it be useful to also have an unspecified register – call it Rx – that basically tells the CPU "give me whatever register you have – I don't care which one".
All of the registers behave like that in a superscalar processor. You ask for a particular architectural register and you get a random physical register - you don't even know which one you have got.

Hubert Lamontagne wrote:
then your SP is not 8-byte aligned anymore for 64bit integer and floating-point value. So you must save the vector size word to 0x01FE0 instead, with some extra padding (and the CPU either stores the amount of padding in the vector size word, or recalculates the amount of padding from SP alignment and vector size when reloading the vector).
Yes, this is a complication. I want to handle it in software without any complex instructions. If the size of the saved register image is not guaranteed to be a multiple of the stack word size then I would first calculate the amount of space needed for all the vector registers I want to save, then save the stack pointer to another register, then subtract the necessary size from the stack pointer, then align the stack by 8 ( AND SP,-8 ), then use a temporary register as pointer to the save area, and then save the registers, incrementing the pointer each time. The restore process is easier. There is no problem with saving registers during a task switch because you have a pre-allocated space that is big enough to cover all cases.

The alternative is to give the saved image of each register a size that is a multiple of the stack word size. This will make it easier to spill registers on the stack at the cost of using more space. It will make it simpler for the compiler and also easier for the hardware because of the better alignment. The cost is that it uses more cache space, which is probably more expensive than the extra instructions. If all vector registers are unused, then we will need only one or two bytes for saving each with the first method, versus eight bytes with the second method.

It is difficult to weigh the costs/benefits of these two solutions against each other, but you are right that the first method is very complicated.
Generally, 8-bit and 16-bit vector multiplications are provided in SIMD instruction sets to do stuff like movie decoding and software rendering
x86 does not have a complete set of vector multiply instructions. NEON has 8, 16, and 32 bits. I don't see what you would need 8-bit vector multiply for?


Author: Hubert Lamontagne Date: 2016-04-04 21:01
Joe Duarte wrote:
Question: Does an ISA really need to specify the number of architectural registers? What would the implications be of not doing so, and having an infinite number of architectural registers like LLVM's IR? It seems like the number of registers is a fiction anyway (I was stunned to discover that x86-64 processors from Intel and AMD have nearly 200 physical registers.) This would make the number of registers implementation-dependent, rather than part of the ISA specification. See Vikram Adve's recent talk at Microsoft: http://research.microsoft.com/apps/vide ... ?id=249344 (The Microsoft Research people must have been having a bad day or something – their questions and comments reveal that they thoroughly misunderstood his ideas.)
This has a cost:
- You can make instructions variable-sized to accommodate different numbers of registers, but this increases branch mispredict penalty and makes it hard to run multiple instructions at the same time.
- What if your cpu has 32 registers and the program uses the 33rd? You can spill values to memory, but then the CPU has to figure out that it doesn't conflict with any other memory reads/writes.
- More registers = instructions become larger. MIPS instructions would take 42bits instead of 32bits if you had 1024 registers instead of 32.
- Larger register files are slower, which reduces clock rate or causes more stalls due to more latency cycles required to get register values.
Question 2: Let's assume that we have registers R0 - R31. Might it be useful to also have an unspecified register – call it Rx – that basically tells the CPU "give me whatever register you have – I don't care which one". I can imagine some scenarios where this might be useful for a compiler. And it seems to fit with the reality of register renaming anyway.
Okay, you can write a result to "Rx", but how do you know which register that "Rx" is once you want to read back your result and do something with it? What if you write to multiple "Rx"'es, how do you keep track of what went where?

------------------------------------

Agner wrote:
Yes, this is a complication. I want to handle it in software without any complex instructions. If the size of the saved register image is not guaranteed to be a multiple of the stack word size then I would first calculate the amount of space needed for all the vector registers I want to save, then save the stack pointer to another register, then subtract the necessary size from the stack pointer, then align the stack by 8 ( AND SP,-8 ), then use a temporary register as pointer to the save area, and then save the registers, incrementing the pointer each time. The restore process is easier.
Oh, I see. You'd use some kind of massive "vector store/load (including size prefix byte)" instruction that's basically never aligned to save all the vectors, then reestablish stack alignment. And for C++ ABI, you'd force all caller functions to save all SIMD vectors and floats and doubles, and use the caller's knowledge of what's in the registers to do a simpler aligned non-variable-sized save (instead of a massive unaligned variable-sized save)... On paper it works, but for some reason I find that rather scary.... :3

For instance, if you have a function working on a bunch of scalar floats, and then it calls some sub-function (say, "sqrt()" or something like that), won't it have to spill every single float register that it's working on onto the stack (potentially on every iteration of a loop)?
Generally, 8-bit and 16-bit vector multiplications are provided in SIMD instruction sets to do stuff like movie decoding and software rendering
x86 does not have a complete set of vector multiply instructions. NEON has 8, 16, and 32 bits. I don't see what you would you need 8-bit vector multiply for?
I'm thinking of the case of 32bpp RGBA bilinear interpolation/texture mapping/rotozoom, along with alpha blending, for the cases where you can't use OpenGL (for instance in Macromedia Flash). That's a notorious CPU pipeline buster, because the algo eats up a ton of small multiplications.
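
For a sense of the arithmetic involved, one channel of a conventional alpha blend looks like this in plain C++; a software renderer does four of these per blended pixel, which is exactly the kind of small multiplication an 8/16-bit vector multiply digests sixteen at a time.

Code: Select all

#include <cstdint>

// dst*(255-alpha) + src*alpha, divided by 255 with rounding.
inline uint8_t blend_channel(uint8_t dst, uint8_t src, uint8_t alpha) {
    return uint8_t((dst * (255 - alpha) + src * alpha + 127) / 255);
}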


Author: Joe Duarte Date: 2016-04-06 19:51
Hubert said:
- 3-bytes can't be aligned to cache lines. If you have an array of pointers, eventually one of the pointers has to straddle 2 cache lines and that's a much larger penalty than some wasted DCache due to using 4-byte pointers.
- 3-bytes is only a 16 MB address space, and few languages want to be limited to something that small. You cannot start with 3-byte pointer program and then dynamically upgrade everything to 4-bytes if you ever run out of space. Might as well make everything 4-byte from the start.
You're assuming byte-addressable memory. I'm assuming that these pointers or references would point to memory objects of arbitrary size, determined by what the variable, object, or function needs. I don't see why a program can't just tag its objects and entities in a virtual memory space with clean and compact pointers (but without garbage collection – just virtual memory.) I feel like there's not enough *virtual* in virtual memory right now – we should be able to abstract more.
Agner wrote:
Joe Duarte wrote:
Does an ISA really need to specify the number of architectural registers? What would the implications be of not doing so, and having an infinite number of architectural registers like LLVM's IR?
What do you mean? If we have 1023 virtual registers like LLVM then we need 10 bits in the opcode for each register. Or do you want a rolling register stack? Then we have a problem when the register stack overflows. That would be quite wasteful if the overflow happens in the innermost loop.
I mean register assignment could be sorted out at install time, or what's commonly called ahead-of-time compilation. One way to think of it is that some of what an LLVM back end does could be postponed until the application is installed and the precise characteristics of the CPU are known. If you look at the SPIR-V IR just released from Khronos, I think some of the optimization will happen at install time. And I think the Mill CPU architecture does this as well – the code is partly compiled into a "specification" until it's installed and knows which version of the CPU the user has.

Adve is talking about something similar in his talk. Also see Nuzman et al's "Vapor SIMD: Auto-Vectorize Once, Run Everywhere": https://www.irisa.fr/alf/downloads/roho ... _CGO11.pdf

On the issue of an Rx register, I think it might be a useful abstraction in some cases. You said that CPUs do this already, that we don't know which register we're getting. Yet we're still naming registers explicitly, and you find it useful to retain named registers in an ISA. There are benefits to having named architectural registers, and I think there would be benefits from anonymous register semantics. Hubert asked how we'd refer back to it. There would be rules about how such register semantics could be used, and how to manage them – they wouldn't be the same as the normal registers. There are a few ways to go about it.

Author: Hubert Lamontagne Date: 2016-04-07 23:32
Joe Duarte wrote:
Hubert said:
- 3-byte pointers can't be aligned to cache lines. If you have an array of pointers, eventually one of the pointers has to straddle 2 cache lines, and that's a much larger penalty than some wasted DCache due to using 4-byte pointers.
- 3 bytes is only a 16 MB address space, and few languages want to be limited to something that small. You cannot start with a 3-byte-pointer program and then dynamically upgrade everything to 4 bytes if you ever run out of space. Might as well make everything 4-byte from the start.
You're assuming byte-addressable memory. I'm assuming that these pointers or references would point to memory objects of arbitrary size, determined by what the variable, object, or function needs. I don't see why a program can't just tag its objects and entities in a virtual memory space with clean and compact pointers (but without garbage collection – just virtual memory.) I feel like there's not enough *virtual* in virtual memory right now – we should be able to abstract more.
Okay, but how do you get the individual fields out of the object you've got the reference of? Then you need both the object's handle/tag/reference/id, and an offset from the start of the object data (traditionally in bytes), or at least some kind of field ID (but that adds yet another translation pass to get the real byte offset).

The other issue is that you're probably going to hammer the TLB a lot harder that way: you're making the TLB hold all your real pointers instead of just page remaps. Which means you'll probably need a bigger, more complex TLB.

Third, this doesn't play well with C++ specifically, because objects aren't necessarily housed within their own memory allocation, they can be contained inside a larger object, or inside an array, or on the stack. So you need to save not only the handle, but also the byte offset. This is essentially the same thing as the infamous FAR pointer from 16-bit x86 coding.
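To illustrate the point (a hypothetical sketch with invented names, not an existing ABI): a reference under such a scheme ends up carrying two fields, which is essentially the FAR pointer again.

Code: Select all

#include <cstdint>

// Hypothetical "handle + offset" reference, as needed when objects are addressed
// by tag instead of by byte address. Reading a field means: translate the handle
// to the object's base, then add the byte offset of the field.
struct ObjectRef {
    uint32_t handle;   // which memory object / allocation
    uint32_t offset;   // byte offset of the field or array element inside it
};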

Joe Duarte wrote:
https://www.irisa.fr/alf/downloads/roho ... _CGO11.pdf

On the issue of an Rx register, I think it might be a useful abstraction in some cases. You said that CPUs do this already, that we don't know which register we're getting. Yet we're still naming registers explicitly, and you find it useful to retain named registers in an ISA. There are benefits to having named architectural registers, and I think there would be benefits from anonymous register semantics. Hubert asked how we'd refer back to it. There would be rules about how such register semantics could be used, and how to manage them – they wouldn't be the same as the normal registers. There are a few ways to go about it.
Then you've either got a register stack like the 8087 fpu or some ultra-low-power CPUs (GreenArrays chips, designed to run Forth), or a queue like in the Mill (which it calls the "belt") or the Itanium rotating register file, depending on if your system forgets the newest values after use or the oldest ones.


Author: HarryDev Date: 2016-04-08 05:46
Hi, related to the JIT discussion I think the proposed ISA needs specific instructions for memory copying, initialization and comparison. Especially, memory copying from a small number of bytes to a big one is important in GC-related scenarios. Often memory copy ends up being implemented as multiple complicated vector instructions, when an ISA instruction would be so much better (the CPU could then handle this in the most optimized way for it). Some research has been done on this where adding "cpblk" and "initblk" instructions was evaluated, and these showed great benefit for code size and speed. I would allow these instructions to have the element size defined, i.e. 1, 2, 4, 8, 16 bytes for example, so one can initialize a "double" array quickly. Perhaps there should even be a zero-out "zeroblk" instruction, since zeroing is used a lot in managed memory scenarios.


Author: Hubert Lamontagne Date: 2016-04-09 00:06
HarryDev wrote:
Hi, related to the JIT discussion I think the proposed ISA needs specific instructions for memory copying, initialization and comparison. Especially, memory copying from a small number of bytes to a big one is important in GC-related scenarios. Often memory copy ends up being implemented as multiple complicated vector instructions, when an ISA instruction would be so much better (the CPU could then handle this in the most optimized way for it). Some research has been done on this where adding "cpblk" and "initblk" instructions was evaluated, and these showed great benefit for code size and speed. I would allow these instructions to have the element size defined, i.e. 1, 2, 4, 8, 16 bytes for example, so one can initialize a "double" array quickly. Perhaps there should even be a zero-out "zeroblk" instruction, since zeroing is used a lot in managed memory scenarios.
That's why x86 has the 'string copy' etc instructions. And the speed gain depends on the CPU generation...:

- On the 8086 / 80286 / 80386, fetching instructions was slow (no instruction cache!), so the string instructions were indeed the fastest option. Other CPUs of the time also had stuff like this: the Z80 has string copies too, as does the 65816, and the 68000 has move multiple.

- On the 486 and Pentium, designers realized that non-microcoded instructions can be implemented to run a lot faster, but there was only so much space on the chip, so only a few instructions could get the speedup; string instructions were left out and actually ran slower... RISC CPUs don't have any string copy instructions either - it just doesn't fit their design!

- On later out-of-order x86 CPUs, string copies are still microcoded so they're slow to start, but once started they do wide multi-byte copies. So the end result is similar to the multiple complicated vector instructions anyway (since CPUs still have to deal with alignment). On the latest Ivy Bridge and Haswell, string copy instructions are now the same speed as software loops - but that's a new thing: on preceding architectures, string copy instructions were still slower... RISC architectures like ARM mostly went with vector instructions to fill this need.

So I'm not sure that string copy/zero/compare instructions are still relevant nowadays because memory bandwidth is not all that fast, and the kind of pointer updating that was costly on 8 and 16 bit CPUs is now free since it runs in ALUs in parallel while the data cache does its job. And having CPU string copy instructions doesn't remove the need for complex cacheline-aligned loops - it's really just moving that complexity from the software to the microcode. I think it would be better to look into garbage collection algorithms that don't do compaction and don't recopy everything (and an end to Java's "allocate humongous blocks of memory on startup" approach).

How about stack machine ISA?

Post by agner »

Author: A-11 Date: 2016-04-10 07:35
It is probably too late, but since I found a research paper today, I will ask about it anyway.

BOOST: Berkeley's Out-of-Order Stack Thingy
https://www.researchgate.net/publicatio ... ack_Thingy

Same as "register renaming" for register machine, these researcher propose "address renaming" for stack machine to implement out-of-order execution.
So I think there are no reasons to prevent stack machine for performance problem.
Since stack machine instruction sets never bother us with number of registers, I think stack machine's ISA is more extensible than register machine's.
Also, stack ISA is easer for compilers as UCSD p-System showed.
it should also hold true for Just In-time Compiling.


treating stack ISA as CISC architecture
Author: A-11 Date: 2016-04-14 02:55
The BOOST architecture above converts each single stack instruction into one corresponding RISC-like micro-op.
Since a stack ISA is even more fine-grained than a load-store RISC ISA, which at least puts the opcode field and operand fields together in one instruction, BOOST generates more RISC-like micro-ops and requires an even more massive renaming unit than a RISC architecture does.
I think this is the major difficulty of BOOST, just as RISC requires more instruction throughput than CISC.
Since each reservation station entry in a conventional CISC machine handles a "macro-op" which represents a read-modify-write sequence and generates several RISC-like micro-ops, CISC machines beat RISC machines here.
I think the source of CISC's power is such a powerful RS entry, which is moving toward being a tiny microprocessor, the same as in EDGE.
Explicit_Data_Graph_Execution - Wikipedia
http://en.wikipedia.org/wiki/Explicit_D ... _Execution
This means that a more powerful macro-op puts less pressure on the renaming unit and thus extracts more performance.
For example, a read-modify-read-modify-read-write macro-op such as the line below is preferable to plain read-modify-write ops.
read [sp + 2] ; read [sp + 4] ; mul ; read [sp + 5] ; add ; load-from-heap ; write [sp + 2] ;
But you will see that this one macro-op is similar to a sequence of 7 instructions written in a stack ISA.
Thus, I think a stack ISA can be treated as a CISC architecture with an instruction border marking unit.
Maybe an algorithm like matching parentheses can be used for border marking.
Note that the instruction border is determined by the machine implementation, not by the ISA as in x86.
If a machine can't deal with big macro-ops like the one above, its border marking unit can split the stack ISA sequence to fit its macro-ops.
read [sp + 2] ; read [sp + 4] ; mul ; //read-read-modify-(write to stack)
read [sp + 5] ; add ; //(read from stack)-read-modify-(write to stack)
load-from-heap ; write [sp + 2] ; //(read from stack)-read-write
Or, since a simple stack machine (single-issue, in-order) is so tiny that it may fit in an RS entry, you might be able to treat a block delimited by jump instructions as one macro-op.


treating stack ISA as CISC architecture
Author: Agner Date: 2016-04-14 09:03

Thank you for the reference to Explicit Data Graph Execution (EDGE) https://en.wikipedia.org/wiki/Explicit_ ... _execution
If the EDGE principle can be implemented smoothly, it might be more efficient than splitting a job into multiple threads running in multiple cores. The thread management, synchronization, and communication between threads is very complicated and inefficient in current systems. An alternative would certainly be worth exploring.

However, I don't quite understand how the Hyperblocks are coded and delimited.


Author: A-11 Date: 2016-04-17 01:51
The recipe below is what I have arrived at and can present now.

-- begin recipe --
(0)
Prepare an empty stack "Parser stack".
Each item of the stack holds an immediate value or a load from the architectural stack (which must be renamed to the register file).
But the bottom item of this stack may hold a Hyperblock.

(1-arg)
If the next instruction is an immediate or a load from the architectural stack, push it onto the Parser stack.
ex.
Parser stack before : { [sp+3] ; $4 ; add } ; $5 ;
next instruction : [sp+5]
Parser stack after : { [sp+3] ; $4 ; add } ; $5 ; [sp+5] ;

(1-op)
If the next instruction is an N-in-1-out (inc, add, fma3, etc.) or N-in-0-out ("store [sp+2] top-of-stack", etc.) micro-op:
[1] Pop N items from the Parser stack and concatenate them with the next instruction into a Hyperblock.
[2] If items remain in the Parser stack, output each of them as an individual Hyperblock.

[3-1out] If the next instruction is a 1-out op, leave the Hyperblock made at [1] on the bottom of the Parser stack.
ex.
Parser stack before : { [sp+3] ; $4 ; add } ; $5 ; [sp+5] ; $7
next instruction : add(2-in-1-out op)
outputs :
{ [sp+3] ; $4 ; add } ;
$5 ;
Parser stack after : {[sp+5] ; $7 ; add } ;

[3-0out] If the next instruction is a 0-out op, also output the Hyperblock made at [1]; the Parser stack is then empty.
ex.
Parser stack before : { [sp+3] ; $4 ; add } ; $5 ; [sp+5] ; $7
next instruction : store [sp+2] top-of-stack (1-in-0-out op)
outputs :
{ [sp+3] ; $4 ; add } ;
$5 ;
[sp+5] ;
$7 ; store [sp+2] top-of-stack ;
Parser stack after : (empty)

(1-other)
For other cases (a 2-out op like "divmod", reaching the maximum length of a Hyperblock, a control flow instruction, and so on),
output each item in the Parser stack as an individual Hyperblock, so the Parser stack ends up empty.

(2) To iterate, go back to either of the (1-*) cases depending on the next instruction.
-- end recipe --
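To make the recipe concrete, here is a minimal software sketch of the same bookkeeping (my own illustration only: instructions are assumed to be pre-decoded into text plus input/output counts, and ops with 2 or more outputs are simply flushed, since that case is left open below).

Code: Select all

#include <iostream>
#include <string>
#include <vector>

// One pre-decoded stack instruction: its text and how many stack items it reads/writes.
struct Insn { std::string text; int inputs; int outputs; };

std::vector<std::string> parser_stack;

void emit(const std::string& hb) { std::cout << hb << " ;\n"; }   // output one Hyperblock

// Step [2] and case (1-other): output every remaining item as its own Hyperblock.
void flush_stack() {
    for (const std::string& item : parser_stack) emit(item);
    parser_stack.clear();
}

void feed(const Insn& in) {
    if (in.inputs == 0 && in.outputs == 1) {            // (1-arg): immediate or load
        parser_stack.push_back(in.text);
    } else if (in.outputs <= 1) {                       // (1-op): N-in-1-out or N-in-0-out
        std::string hb;
        for (int i = 0; i < in.inputs && !parser_stack.empty(); ++i) {
            hb = parser_stack.back() + " ; " + hb;      // [1] pop the N inputs...
            parser_stack.pop_back();
        }
        hb += in.text;                                  // ...and append the op itself
        flush_stack();                                  // [2] older items become their own blocks
        if (in.outputs == 1)
            parser_stack.push_back(hb);                 // [3-1out] keep as the bottom item
        else
            emit(hb);                                   // [3-0out] output; stack is now empty
    } else {                                            // (1-other): divmod, control flow, ...
        flush_stack();
        emit(in.text);                                  // treated as its own block here
    }
}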

Though I mentioned before that an RS entry should move toward being an individual processor, it must be tiny, and therefore it lacks a lot of units.
In any case, each RS entry in a current conventional CISC processor lacks the ability to do flow control, "mov" elimination by renaming, or out-of-order execution.
So a Hyperblock should not contain a "mov" instruction or concurrent flows like the ones below.
($4 $5 add) ($2 $3 shift-left) // these have no dependency on each other, so concurrency should be detected
swap-top-2-item div // "swap-top-2-item" should be eliminated by renaming
This is the reason the Parser stack should become empty every time it meets a micro-op.
The Parser stack is a parse-time emulation of the stack inside an RS entry.
You may point out that
$4 ($2 $3 shift-left) add
has no concurrent flow.
I ignore such cases to keep the recipe simple enough to implement easily in hardware.

For ops which have 2 or more outputs, I have not yet found an easy way to treat them.


Author: Hubert Lamontagne Date: 2016-04-17 14:35
Agner wrote:
Thank you for the reference to Explicit Data Graph Execution (EDGE) https://en.wikipedia.org/wiki/Explicit_ ... _execution
If the EDGE principle can be implemented smoothly, it might be more efficient than splitting a job into multiple threads running in multiple cores. The thread management, synchronization, and communication between threads is very complicated and inefficient in current systems. An alternative would certainly be worth exploring.

However, I don't quite understand how the Hyperblocks are coded and delimited.
I do have a design in store that does what EDGE does, and should be targetable from C++ compilers, but it's kinda weird:

- Registers are the accumulator (ac), and a rotating register file with 4 partitions of 16 registers (a0-a15, b0-b15, c0-c15, d0-d15). The ALLOC instruction shifts down register file names, so for instance ALLOC d10..d15 will move down the previous contents of d15 to d9, the content of d14 to d8, d13->d7, d12->d6, d11->d5, d10->d4, d9->d3, d8->d2, d7->d1, d6->d0, and the contents of d0..d5 are lost. The new values in registers d10..d15 are marked as "uninitialized" and instructions that try to read them will stall until the registers are written to. ALLOC is always in the form of aN..a15, bN..b15, cN..c15, dN..d15 and can allocate from multiple partitions at the same time.

- Each non-accumulator register can only be written to once (!). This means that it's basically a form of hardware SSA. This is why there are multiple partitions: values have multiple classes of life duration: extremely short (accumulator), very short (would probably use d0..d15, typically used for multi-use temporary values and merging 2 branches of calculation), loop counters (c0..c15, get rewritten every loop), and then various long lived values like loop constants and stack pointers and the like (aN and bN).

- Instructions come in groups. Each group is a sequence of serial instructions, and operates on the accumulator: the first operation of the group must not have any dependency on the previous state of the accumulator, but then subsequent instructions modify the accumulator. Every instruction always writes to the accumulator, plus optionally also has a store to a register from the rotating register file. Example of group:
mul ac, d14, b15 ;no dependency on the previous state of the accumulator, start of group
sar ac, 16
add ac, d13, st d12 ;result goes in both accumulator and d12
add ac, d15, st d11 ;result goes to both accumulator and d11, group ends

- Memory loads/stores must use [register+immediate] as addressing mode (it's not possible to use the accumulator as part of address calculations), and are separately issued in-order, independent of ALU instruction grouping.

- The ALLOC instruction is also not part of ALU instruction grouping.

- The reason for this weird design is that it makes register renaming very easy: every time an ALLOC instruction comes up, you simply increase the rotation index for the target register partitions and clear the 'value ready' bit for the new register names (and also check that the physical registers you're using are free). Once that is done, every single following ALU operation group is _guaranteed_ to be parallelizable (aside from the memory loads/stores) because every register name is unique!

- Each new ALU instruction group is queued to a free ALU, and each ALU runs one instruction per cycle from its group. If the instruction's rotating register file operand doesn't have its 'value ready' bit on, then this instruction group stalls until the value becomes ready (in other words, it waits until the value gets stored by an ALU or memory load operation).

- Register file renaming can be done late by the ALUs, since the rotating register file index of each partition is recorded when the group starts. This also means that every ALU can practically have its own ICACHE - they can be _scheduled_ 100% independently from other instructions!

- Multiple ALU groups can be scheduled per cycle, in any order(!). The only thing that has to be done in order is register renaming using the ALLOC instruction and memory loads and stores.

- For the C++ compiler, working from SSA, it has to tag all the operations that are part of a series of operations, to form groups. This is not that hard as far as I can tell: every time a value is only used by the next operation in the sequence, then it can be passed using the accumulator instead. Generally, all single-destination values can be assigned to the accumulator. Since the compiler knows how long each value lives, it can make sure register file renames using ALLOC never wipe out values that are still in use (although it does have to figure out the lifespan of each value).


This design still needs lots of refinement (especially in the area of loads/stores, calling convention...) and is more complex than I'd like (hence the 'this is weird' warning), and probably also fairly out-there, and potentially doesn't gain too much over the traditional out-of-order RISC (if the whole thing is limited by memory performance in particular), and reminds me of the Itanium at times, but it does have the advantage of being relatively implementable and compilable, and in theory there's no limit to the number of ALUs you can run in parallel (I can easily see designs with 4, 8, even more ALUs).


stack ISA versus long vectors
Author: Agner Date: 2016-04-18 00:37
Hubert Lamontagne wrote:
I do have a design in store that does what EDGE does, and should be targetable from C++ compilers, but it's kinda weird:

- Registers are the accumulator (ac), and a rotating register file with 4 partitions of 16 registers (a0-a15, b0-b15, c0-c15, d0-d15). The ALLOC instruction shifts down register file names, so for instance ALLOC d10..d15 will move down the previous contents of d15 to d9, the content of d14 to d8, d13->d7, d12->d6, d11->d5, d10->d4, d9->d3, d8->d2, d7->d1, d6->d0, and the contents of d0..d5 are lost. The new values in registers d10..d15 are marked as "uninitialized" and instructions that try to read them will stall until the registers are written to.
Thank you for explaining your idea. It might be a problem that you have only one accumulator.

The best candidate for an independent instruction block is a loop iteration with no loop-carried dependency. I think it would be easier for the compiler in this case to just use a very long vector register with variable length to cover multiple iterations of the loop at once.

The main problem with very long vectors is instructions that move data horizontally across a vector. The latency of horizontal data moves may increase with the vector length. I have an idea to mitigate this problem a little. All instructions that involve horizontal data movement across a vector have information about the distance of the move (e.g. index or shift count) in a separate register or an immediate constant. The scheduler wants to know the latency of the instruction as early as possible. It will be allowed to read the "distance register" at an early stage in the pipeline before the other operands are ready. This value will typically be available early anyway thanks to out-of-order execution. There is probably no way to avoid the data transfer delay, but horizontal moves will be rare anyway. Current designs are already reading registers used in address calculation earlier in the pipeline than operand registers. I want to use a similar mechanism for predicting instruction latency.

It will also be an advantage to know the vector length early so that it can clock gate or power down unused parts of the buses and ALUs to save power.


Author: Hubert Lamontagne Date: 2016-04-19 23:16
Agner wrote:
Thank you for explaining your idea. It might be a problem that you have only one accumulator.
It's on purpose! If you need a second accumulator, that means that your calculation has two branches to it (either from a split or a join), which means it has some parallelism to it, which means you want it to run on more than one ALU to exploit that parallelism, which means you need some sort of register to send values between the ALUs. That register for sending values between ALUs can be a second accumulator, but then that's a lot harder to rename, because the scheduler has to go through the whole stream of instructions and give out a whole bunch of different names. What I'm suggesting is that this crazy renaming should be done beforehand, and that's what the whole rotating-name single-write register file is there for: it gives different names to all the renamed versions of the second accumulator beforehand, so that the scheduler has to do renames much less often, and it can do a whole bunch of renames at the same time.

Agner wrote:
The best candidate for an independent instruction block is a loop iteration with no loop-carried dependency. I think it would be easier for the compiler in this case to just use a very long vector register with variable length to cover multiple iterations of the loop at once.
The main problem with very long vectors is instructions that move data horizontally across a vector. The latency of horizontal data moves may increase with the vector length. I have an idea to mitigate this problem a little. All instructions that involve horizontal data movement across a vector have information about the distance of the move (e.g. index or shift count) in a separate register or an immediate constant. The scheduler wants to know the latency of the instruction as early as possible. It will be allowed to read the "distance register" at an early stage in the pipeline before the other operands are ready. This value will typically be available early anyway thanks to out-of-order execution. There is probably no way to avoid the data transfer delay, but horizontal moves will be rare anyway. Current designs are already reading registers used in address calculation earlier in the pipeline than operand registers. I want to use a similar mechanism for predicting instruction latency.
It will also be an advantage to know the vector length early so that it can clock gate or power down unused parts of the buses and ALUs to save power.
Hmm, that's not too hard to do if the vector lengths are in the normal register file, and the SIMD unit is running late compared to the scalar integer unit. Maybe you could have multiple swizzles with different granularities and delay times - like a 1 byte granularity scramble, and then maybe a 4 byte scramble, 16 byte scramble, a 64 byte scramble... Maybe the cpu could look in the swizzle data and figure out how many 0 bits there are and use the coarsest correct granularity.


Author: Agner Date: 2016-04-20 00:08
Hubert Lamontagne wrote:
If you need a second accumulator, that means that your calculation has two branches to it (either from a split or a join), which means it has some parallelism to it
OK, I see.
that's not too hard to do if the vector lengths are in the normal register file, and the SIMD unit is running late compared to the scalar integer unit. Maybe you could have multiple swizzles with different granularities and delay times - like a 1 byte granularity scramble, and then maybe a 4 byte scramble, 16 byte scramble, a 64 byte scramble... Maybe the cpu could look in the swizzle data and figure out how many 0 bits there are and use the coarsest correct granularity.
That's indeed very close to what I have in mind. A permute instruction will have the granularity given by the operand type, one vector output, two vector inputs for input data and indexes, and then a "block size" given by an integer register. For example, if the block size is 16 bytes then data can only be moved within each 16-bytes block of the vector. There has to be a maximum block size anyway because the complexity of the circuit grows with the square of the block size and the latency also grows with the block size. The maximum block size may be implementation dependent and it may be smaller for low granularities.
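As a rough illustration of those semantics (a scalar sketch of my own, not ForwardCom syntax; 32-bit elements are assumed and the block size is assumed to divide the vector length):

Code: Select all

#include <cstddef>
#include <cstdint>

// Emulation of a block-restricted permute: indexes select elements only within
// their own block, so data never crosses a block boundary. 'block_size' plays
// the role of the integer register operand described above (in bytes).
void permute_blocked(const uint32_t* data, const uint32_t* index, uint32_t* result,
                     std::size_t vector_bytes, std::size_t block_size) {
    std::size_t elems_per_block = block_size / sizeof(uint32_t);
    std::size_t n = vector_bytes / sizeof(uint32_t);
    for (std::size_t i = 0; i < n; ++i) {
        std::size_t block_start = (i / elems_per_block) * elems_per_block;
        result[i] = data[block_start + index[i] % elems_per_block];
    }
}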


Author: A-11 Date: 2016-04-18 08:09
Well... I forgot to explain why the bottom item of the Parser stack can hold a Hyperblock while the other items can't.
Forgive me for adding a bit of explanation below.

-- begin --
(1-op)
If the next instruction is an N-in-1-out (inc, add, fma3, etc.) or N-in-0-out ("store [sp+2] top-of-stack", etc.) micro-op:
[1] Pop N items from the Parser stack and concatenate them with the next instruction into a Hyperblock.
ex.
Parser stack before : { [sp+3] ; $4 ; add } ; $5
next instruction : add(2-in-1-out op)
Hyperblock : { [sp+3] ; $4 ; add ; $5 ; add }
Parser stack after [1] : (empty)
[2] If items remain in the Parser stack, output each of them as an individual Hyperblock.
[3-1out] If the next instruction is a 1-out op, leave the Hyperblock made at [1] on the bottom of the Parser stack.
ex.
Parser stack after [2] : { [sp+3] ; $4 ; add ; $5 ; add }
-- end --

Thanks to Reverse Polish Notation, you are free to replace a constant value with an instruction sequence which produces that value.
ex.
$4 ; $3 ; add
can be replaced with
($2 ; $2 ; add) ; ($9 ; $3 ; div) ; add
and more
(($5 ; $3 ; $-13 ; fma3) ; ($1 ; $1 ; add) ; add) ; (($5 ; $4 ; add) ; (return-three-func-addr ; call-from-addr) ; div) ; add
and so on.
But since each instruction sequence has no dependency on the others, replacing more than one value gives a Hyperblock more than one flow.
I want to avoid concurrent flows in a Hyperblock, thus the recipe allows only one item in the Parser stack to hold a Hyperblock.

Proposal for an ideal extensible instruction set

Post by agner »

Author: zboson Date: 2016-04-11 03:24
Hi,

I think it might be worth considering other three operand single rounding mode instructions besides FMA.

As I discovered here
http://stackoverflow.com/questions/3057 ... ubledouble
double-double multiplication can be done with

high = a * b;
low = fma(a, b, -high);

However, double-double addition cannot be done so simply. If there were a three-operand, single-rounding-mode (a+b+c) instruction, then double-double addition would be much faster (actually (a+b-c) is perhaps the most interesting). I realize this may seem to be a very special case. But double-double is a kind of poor man's quad-precision floating point. So in that sense it's maybe not such a special case for those interested in quad-precision floating point. The only thing holding double-double back (if an instruction set already has FMA) is a single-rounding-mode (a+b-c), and this would be a lot simpler to implement than full quad-precision floating point support, I think.


Author: Agner Date: 2016-04-11 08:04
HarryDev wrote:
I think the proposed ISA needs specific instructions for memory copying, initialization and comparison.
My experience with x86 processors is that these microcoded instructions have very long setup times and that software loops with vector registers are faster in most cases. I want to avoid complex microcode instructions. Instead, I have designed instructions for very fast software loops using variable-length vector registers.
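As a rough scalar sketch of the loop style this enables (an illustration only, not actual CRISC1 code: MAX_VECTOR_BYTES stands in for the implementation's maximum vector length, and memcpy stands in for a single vector load plus store of the given length):

Code: Select all

#include <cstddef>
#include <cstring>

const std::size_t MAX_VECTOR_BYTES = 64;   // assumed implementation constant

// Each iteration processes min(remaining, MAX_VECTOR_BYTES) bytes, so there is
// no scalar prologue or epilogue for odd-sized blocks.
void block_copy(char* dst, const char* src, std::size_t n) {
    std::size_t remaining = n;
    while (remaining > 0) {
        std::size_t len = remaining < MAX_VECTOR_BYTES ? remaining : MAX_VECTOR_BYTES;
        std::memcpy(dst, src, len);        // one partial-length vector load + store
        dst += len;
        src += len;
        remaining -= len;
    }
}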

A-11 wrote:
BOOST: Berkeley's Out-of-Order Stack Thingy
Thanks for the interesting reference. As I understand the article, it is more difficult to do out-of-order processing on a stack machine than on a conventional machine.

zboson wrote:

double-double multiplication can be done with

high = a * b;
low = fma(a, b, -high);

However, double-double addition cannot be done so simply. If there were a three-operand, single-rounding-mode (a+b+c) instruction, then double-double addition would be much faster (actually (a+b-c) is perhaps the most interesting).

Interesting idea. Actually, I once used that fma trick to get extra precision in a multiplication. So as I understand your idea, you want to do:

Code: Select all

high = a + b;
low = a + b - high;  // (with extra precision in the intermediate)
I think this would be quite costly to implement in hardware, considering the rare use. Instead, you can do this:

Code: Select all

high = a + b;
if (abs(a) > abs(b)) {
   low = b - (high - a);
} else {
   low = a - (high - b);
}
This does not require any extra hardware.


Author: Hubert Lamontagne Date: 2016-04-11 21:03
There is an algorithm for doubling the precision of floating point addition:

Kahan summation https://en.wikipedia.org/wiki/Kahan_summation_algorithm
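For reference, a minimal textbook implementation of that algorithm (nothing architecture-specific):

Code: Select all

#include <cstddef>

// Kahan (compensated) summation: 'c' carries the low-order bits that would
// otherwise be lost, roughly doubling the effective precision of the sum.
double kahan_sum(const double* x, std::size_t n) {
    double sum = 0.0, c = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        double y = x[i] - c;       // apply the correction from the previous step
        double t = sum + y;        // low-order bits of y are lost here...
        c = (t - sum) - y;         // ...and recovered into c
        sum = t;
    }
    return sum;
}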

A-11 wrote:
It is probably too late, but since I found a research paper today, I will ask about it anyway.

BOOST: Berkeley's Out-of-Order Stack Thingy
https://www.researchgate.net/publicatio ... ack_Thingy

Just as register machines use "register renaming", these researchers propose "address renaming" for stack machines to implement out-of-order execution.
So I think there is no performance reason to rule out a stack machine.
Since stack machine instruction sets never bother us with the number of registers, I think a stack machine ISA is more extensible than a register machine's.
Also, a stack ISA is easier for compilers, as the UCSD p-System showed.
This should also hold true for just-in-time compiling.
That's an interesting architecture. Though I'm not sure I like how it's basically doing 2 or 3 memory accesses per cycle but then removing these accesses using aggressive renaming, and that the stack and function-local memory accesses seem to not be totally consistent with the "real" memory loads/stores through pointers (otherwise it would have to stall every time, no?).

Come to think of it, are there other "not totally consistent" memory models that are useful, performance wise? (aside from the standard multi-core - with - memory-barriers - and-so-forth stuff)

The one thing I do like about stack based architectures is how they reduce the number of register file read/writes, and intuitively I'd be more interested in a model that (1) also has registers (ie it's not using the stack to store values but rather to keep track of very short lived temporary values - and it has the nice property that the lifespan of values is well known) (2) has a limited stack depth - something like 16 values with no automatic spilling (ie just enough for calculations, function calls would still be done using the normal RISC mechanism and explicit memory operations for register/stack spills) (3) you're supposed to keep the stack empty as often as possible so that the CPU can automatically figure out which sections it can run in parallel (each time the size falls to 0, that's a new parallelizable section).


Author: Agner Date: 2016-04-12 00:12
Hubert Lamontagne wrote:
The one thing I do like about stack based architectures is how they reduce the number of register file read/writes, and intuitively I'd be more interested in a model that (1) also has registers (ie it's not using the stack to store values but rather to keep track of very short lived temporary values - and it has the nice property that the lifespan of values is well known) (2) has a limited stack depth - something like 16 values with no automatic spilling (ie just enough for calculations, function calls would still be done using the normal RISC mechanism and explicit memory operations for register/stack spills) (3) you're supposed to keep the stack empty as often as possible so that the CPU can automatically figure out which sections it can run in parallel (each time the size falls to 0, that's a new parallelizable section).
That sounds useful, but isn't this the same as x87? The main problem with x87 is that you have to do a lot of register swapping to get each operand to the top of the stack when you need it. Each value gets swapped to different positions several times through its lifetime, so it becomes difficult to track where each value is. I haven't seen any compiler that can handle register variables nicely and keep them on the stack. If you want to avoid all that swapping then you need to access registers that are not on the top of the stack, and then the stack idea sort of disappears. Is there a better way of doing this?


Author: Hubert Lamontagne Date: 2016-04-12 23:59
Agner wrote:
Hubert Lamontagne wrote:
The one thing I do like about stack based architectures is how they reduce the number of register file read/writes, and intuitively I'd be more interested in a model that (1) also has registers (ie it's not using the stack to store values but rather to keep track of very short lived temporary values - and it has the nice property that the lifespan of values is well known) (2) has a limited stack depth - something like 16 values with no automatic spilling (ie just enough for calculations, function calls would still be done using the normal RISC mechanism and explicit memory operations for register/stack spills) (3) you're supposed to keep the stack empty as often as possible so that the CPU can automatically figure out which sections it can run in parallel (each time the size falls to 0, that's a new parallelizable section).
That sounds useful, but isn't this the same as x87? The main problem with x87 is that you have to do a lot of register swapping to get each operand to the top of the stack when you need it. Each value gets swapped to different positions several times through its lifetime, so it becomes difficult to track where each value is. I haven't seen any compiler that can handle register variables nicely and keep them on the stack. If you want to avoid all that swapping then you need to access registers that are not on the top of the stack, and then the stack idea sort of disappears. Is there a better way of doing this?
Not quite... x87 lacks non-stack registers and isn't designed to help the CPU figure out which sections of the calculation are parallelizable. If you start doing register swapping (like the whole fxch+op thing) then it just becomes a sort of register file. Here, the CPU starts a new parallelizable section every time the stack size falls to zero. Each parallelizable section has its own stack, and it can be shown that the sections can't interfere (register file accesses still have to be renamed and memory conflicts still have to be resolved, though, and a branch prediction failure can invalidate speculative sections). A similar architecture can be designed using an accumulator plus a register file.

Example code (linear interpolation of 16bit data):

Code: Select all

loop:
push r9
shr 16
push short [r8 + stack*2]
pop r0 ;renames r0

;this section can execute in parallel (once registers have been renamed)
push r9
shr 16
add 1
push short [r8 + stack*2]
sub r0 ;waits after renamed r0
push r9
and 0xffff
mul stack
sar 16
add r0
pop short [r7]

;this section can execute in parallel
push r9
add r10
pop r9 ;renames r9

;this section can execute in parallel
push r7
add 2
dup
pop r7 ;renames r7
cmp r11
branch if lower to loop

Version 1.01

Post by agner »

Author: Agner Date: 2016-05-10 10:28
The instruction set specification is now updated to version 1.01: www.agner.org/optimize/instructionset.pdf

The instruction set has got the name CRISC1 to indicate the compromise between RISC and CISC.

All user-level instructions are now defined.

The most important change is that the length of a vector is now saved in the vector register itself. This has many advantages. When you need to save a vector register in a function or at a task switch, you only have to save the part of the register that is actually used. If the register is unused then you only have to save a zero for the length. Please see the document for further discussion of the advantages.


Author: Hubert Lamontagne Date: 2016-05-13 16:05
Nice. Do you have any plans for writing an assembler or a simulator?
(or other more involved stuff like a verilog/vhdl implementation or a c compiler... though that's a lot of work!)


Author: Agner Date: 2016-05-14 05:23
Hubert Lamontagne wrote:
Do you have any plans for writing an assembler or a simulator?
(or other more involved stuff like a verilog/vhdl implementation or a c compiler... though that's a lot of work!)
That is the plan. Right now I am working on the ELF object file format for this architecture.

But I am very busy with other projects too, so progress will be slow as long as this is a one man project.

I have an idea for how to implement system calls. It is a system call instruction which takes as input a 64-bit ID number. The upper 32 bits is a module ID which identifies a system module or device driver (The system core has ID = 0). The lower 32 bits identify a function within this module. System add-on modules and device drivers do not necessarily have fixed ID numbers because this would require some central authority to assign these ID numbers. Instead, the program will have to ask for the ID number by giving the name of the module. The functions within a module can have fixed or variable ID numbers.

There will be a system function (with a fixed ID number) which takes the names of the module and function as input and returns the ID number. The ID number can be retrieved in this way before the first call to the function.

The ID number can be put into the program in three ways:
  • The most important system functions have fixed ID numbers which can be inserted at compile time.
  • The ID number can be found at load time in the same way as relocation works. The loader will find the ID number and insert it in the code before running the program.
  • The ID number is found at run time before the first call to the desired function.
The system call instruction can get the necessary ID number either in a 64-bit register operand or in an immediate constant with 32 bits for the module ID and 16 bits for the function ID (assuming that the module has no more than 65000 functions).

The calling convention for system functions is the same as for other functions, using registers for parameters and for return value. Any register not used for a parameter to the system function can be used for the ID of the function.

The system call instruction will change the instruction pointer and stack pointer and proceed to system code. The old values of instruction pointer and stack pointer are saved in special registers, to be restored by a system return instruction.
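As a small illustration of the ID layout (a sketch with invented names; the 32/32-bit split is as described above, everything else here is an assumption):

Code: Select all

#include <cstdint>

// Pack a module ID and a function ID into the 64-bit system call ID:
// module ID in the upper 32 bits, function ID in the lower 32 bits.
uint64_t make_syscall_id(uint32_t module_id, uint32_t function_id) {
    return (static_cast<uint64_t>(module_id) << 32) | function_id;
}

// Intended use: resolve the module ID once (via the fixed-ID lookup function),
// cache the combined ID, and pass it to the system call instruction in a
// register alongside the normal parameter registers.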


Author: Harry Date: 2016-06-02 02:13
progress will be slow as long as this is a one man project.
Could I suggest that you move the development incl. discussions of CRISC to GitHub instead? This would no doubt make collaboration easier, allow for pull requests, etc. And allow for topic focused issue discussions so the many different aspects of CRISC can be discussed in different threads.

Practically, I would change the CRISC document to a set of GitHub-flavored markdown files. These are easily editable and work well with git version control. In time, code, simulators, etc. can be added to this project as well.

I believe this would make this project more available to a wider audience and increase collaboration. You could setup an organization for this on github and also use several repositories if this would be better.

Public repository

Post by agner »

Author: Agner Date: 2016-06-02 05:16
Harry wrote:
Could I suggest that you move the development incl. discussions of CRISC to GitHub instead?
You are right. This project has developed into more than I initially expected and it is approaching a level where it makes sense to move it to a public repository. What are the pros and cons of the different public repositories, such as GitHub, SourceForge, Savannah, etc.?

Version 1.02 of the CRISC1 document is underway with more detailed specifications of system calls, memory management, object file format, etc. This will no longer fit into a single file.


Author: Harry Date: 2016-06-02 13:54
Not that I have a great amount of experience with other public repositories, but GitHub really has a very good story with regard to issues and the discussions related to them. It is also easy to do pull requests and discussions around these, which encourages collaboration. Additionally, Markdown is supported by default and rendered on the web page. It also supports tables, which not all markdown flavors do. In addition, the GitHub for Windows install is great for working with git from the "Git Shell" powershell command line. Since it is git, you will always have a local copy of everything, and can use your own custom git repo for backup or similar, if you ever want to move to something else.

GitHub is also becoming the go-to place for open source projects from Microsoft e.g. .NET Core etc. and many others.

I would not recommend sourceforge at all. It is terrible in my view.

I would recommend GitHub.

Just to give an example you can see how the .NET coreclr repo looks like here: https://github.com/dotnet/coreclr and how docs are an integrated part of it.


Author: Harry Date: 2016-06-02 13:56
You could use pandoc to convert Word docx (if that is your format) to markdown; see ronn-bundgaard.dk/blog/convert-docx-to-markdown-with-pandoc/


Author: Agner Date: 2016-06-09 11:53
Harry wrote:
Could I suggest that you move the development incl. discussions of CRISC to GitHub instead?
The name CRISC is taken on GitHub, but there is no activity whatsoever on the name and no contact address. I don't know if it is possible to take over an inactive GitHub account.

Anyway, the world has enough short acronyms. Does anybody have a suggestion for another name for this project?

Rethinking DLLs and shared objects

Post by agner »

Author: Agner Date: 2016-05-20 05:07
Windows systems use dynamic link libraries (DLLs) and Unix-like systems (Linux, BSD, Mac OS) use shared objects (SOs). Both types have a number of disadvantages. I have an idea for replacing DLLs and SOs with something more efficient in the new CRISC architecture.

Let me first explain how the current systems work. A normal program goes through the steps of compiling, linking, and loading. The linker joins multiple program modules and library functions together and adjusts all addresses in the linked program. Any absolute addresses in the code are adjusted according to the placement of each module, and all relative addresses from one module to another are calculated. This process is called relocation. The loader may do the relocation once again in case the program is placed at a memory address different from the address that was assumed in the link process. A DLL or SO is linked and loaded in basically the same way as an executable program.

The main difference between a DLL and a SO is that shared objects allow "symbol interposition". It is possible to override a symbol in a SO by defining another symbol with the same name in the calling program. This feature is intended to mimic the behavior of a static link library, but symbol interposition is hardly ever used and it comes at a high cost. Every access to a function or a global variable needs to go via a procedure linkage table (PLT) or a global offset table (GOT). This applies even to internal references inside the SO if the symbol is globally visible. It would be easy to bypass this time-consuming mechanism if the linker and loader allowed it, but for unknown reasons, they don't.

The advantage of using a DLL or SO is that the loaded code can be shared between multiple running programs. This rarely saves any memory, however, because the library may contain hundreds of functions while you are only using a few of them. It is not uncommon to load a DLL or SO of one megabyte and use only one kilobyte of it.

Another problem that makes DLLs and SOs less efficient than static libraries is that they are scattered around in each their memory block. Each DLL/SO will use at least two memory pages, one for code and one for data. The scattered memory access makes caching less efficient.

Now, my proposal for the CRISC architecture is to get completely rid of DLLs and SOs. Instead, we will have only one type of function libraries that can be used in three different ways:
  • Static linking. This will work the same way as today.
  • Load-time linking. The library is linked with the executable program by the loader when the program is loaded into memory.
  • Run-time linking. The library is loaded by commands in a running program.
In all three cases, we are loading only those functions from the library that are actually needed.

Load-time linking will be easier with the CRISC system than with existing systems because the CODE and DATA sections are independent of each other in the CRISC system. The CODE (and CONST) sections are addressed relative to the instruction pointer, while the DATA section is addressed relative to a special pointer called the data section pointer (DATAP). CODE and DATA can be placed anywhere in memory independently of each other. If extra library functions need to be linked in at load time, then the CODE and DATA sections of the library functions are simply appended to the CODE and DATA sections of the main program, and any new cross references are resolved. The result will be very efficient because the code and data of the library functions are contiguous with the code and data of the main program, so that caching is improved. There are no intermediate import tables or PLTs to slow down the execution.

Run-time linking is used less often. It is needed only when the choice of library depends on user input to the running program, or when a library is loaded explicitly from a not-compiled script language. The loader can use several different methods when run-time linking is requested:
  • The main program may have reserved extra memory space for the library functions. This information is stored in the header of the executable program file. The library function is accessed through a function pointer which is returned by the load_library function. Any DATA section in the library can be addressed through DATAP, using the normal relocation procedure.
  • If there is no reserved space, or the reserved space is too small, then the loader must place the library function somewhere else in memory. If there is a vacant memory space within a distance of +/- 2 GB from the address in DATAP then the same method as above is used.
  • If there is no vacant space within 2 GB of DATAP then the loader can insert a stub that changes DATAP to point to the DATA section of the library function. The function is called through this stub, which changes DATAP when called, and restores the old value of DATAP on return. If the function can throw an exception then the exception handler needs to restore DATAP as well.
  • The library function can be compiled with a compiler option that tells it not to use DATAP. The function will load the absolute address of its DATA section into a general purpose register and access its data with this register as pointer.
If lazy loading of a program module is desired then use the same method as for run-time linking, or put the lazy module into a separate executable file.

Newer versions of Linux have a feature called Gnu indirect function which makes it possible to choose between different versions of a function at load time depending on, for example, the microprocessor version. This feature will not be copied in the CRISC system because it relies on a PLT. Instead, we can make a dispatcher system to be used with load-time linking. The library can contain a dispatch function which tells which version of a library function to load. The loader will first load the dispatch function (possibly using run-time linking into itself) and call it. The dispatch function returns the name of the chosen version of the desired function. The loader then unloads the dispatch function and links the chosen function into the main program. The dispatch function must have access to information about the hardware configuration, command line parameters, environment variables, and anything else that it might need to choose which version of the function to use.
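A minimal sketch of what such a dispatch function might look like (illustrative only; the HardwareInfo structure and the version names are invented, and the real interface between loader and dispatcher is not specified here):

Code: Select all

#include <cstdint>

// Hypothetical hardware description handed to the dispatcher by the loader.
struct HardwareInfo {
    uint32_t max_vector_bytes;   // largest vector length supported by this CPU
};

// The dispatch function returns the name of the library function version that
// the loader should then link into the main program in place of the generic one.
extern "C" const char* dispatch_memcpy(const HardwareInfo* hw) {
    if (hw->max_vector_bytes >= 64) return "memcpy_v512";
    if (hw->max_vector_bytes >= 32) return "memcpy_v256";
    return "memcpy_generic";
}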

System functions and device drivers are called by using an ID number rather than a function pointer. This ID number can be resolved at link time, load time or run time just like library functions.

The advantages of my proposal are:
  • There is only one type of function libraries. The same library can be used with any of the three methods: static linking, load-time linking, and run-time linking.
  • Only the part of the library that is actually needed is loaded.
  • The code and data of the library is contiguous with the code and data of the calling program in most cases. This makes memory management simpler, avoids memory fragmentation, and improves caching.
  • There are no intermediate import tables, procedure linkage tables or global offset tables to reduce the performance.
Any comments?


Author: cv Date: 2016-05-20 12:33
Hi,

How would you deal with non-reentrant functions (and their global data) belonging to the same library in all your three types?
And subsequent non-exported & helper functions internal to the library that could potentially modify those global data?


Author: Agner Date: 2016-05-20 13:51
cv wrote:
How would you deal with non-reentrant functions (and their global data) belonging to the same library in all your three types?
And subsequent non-exported & helper functions internal to the library that could potentially modify those global data?
Library functions should be thread-safe by default because we are aiming at high performance which often means multiple CPU cores running multiple threads. Of course it is possible to make functions that are not thread-safe or reentrant, whatever your reason might be for doing so. A function with static data is likely to be thread-unsafe unless you take special precautions. My proposal allows functions to have a static data section that can be used for whatever you want. If you want multiple library functions to share the same static data then you can either put them into the same module or put labels on the data with public visibility.


Author: Peter Cordes Date: 2016-05-30 08:06

I like the idea of getting rid of the indirection on every call to a library function; that seems like a good plan. Current software is of course designed for the current situation of lazy dynamic linking. This non-lazy dynamic linking may not do very well with some existing software.

GUIs tend to use a lot of library code. I'm worried that copying so much code into private memory for every process will use significant amounts of memory. It won't be touched a lot of the time, but everything that's reachable from the QT or GTK API functions used by a program has to be read from disk and copied into the process's private memory, not just mmap(MAP_SHARED). This means it can only be paged out to swap space, since the library text isn't backed by a file on disk.

Web browsers build much of their code as libraries. Many browsers use multiple processes instead of just multiple threads, as an extra barrier against vulnerabilities. On Linux, I assume chrome just fork()s, so memory that isn't modified after that is shared. AFAIK, Windows doesn't have an equivalent system call, so each process might be stuck with its own private copy of all the library code.

In summary, I think the current state of desktop software is too bloated for this to be a good idea.

If we hope to ever see any of this proposal get used for real (at some point in the far future?), I think we might need some kind of lazy dynamic linking, or at least shared text segments for libraries. Vendors will always want to be able to sell cheap machines with the minimum amount of memory.

Non-lazy linking sounds great for long-running processes that don't overdo it with libraries (e.g. a web server process), but I'm also worried about higher startup overhead for commands used from shell-scripts. With lazy dynamic linking, code for functions that are never called (because that command-line option wasn't used) doesn't ever have to be read into memory. OTOH, most programs will use the same libc functions, and having them hot in the pagecache means it's cheap for the dynamic linker to memcpy them.

It would be interesting to run some tests comparing shell-script speed (cut/grep/sed/cat/find) with Linux lazy dynamic linking vs. this proposal. I think this could work on x86, or any other arch. Linux supports custom executable file formats, or for this experiment we could use ELF with a different dynamic linker as a custom ELF-interpreter. IDK if we'd want a different library object-file format, though, to make it easy and fast to recursively find all the functions a library-function could itself call.


Author: Agner Date: 2016-05-30 11:11
Peter Cordes wrote:
GUIs tend to use a lot of library code. I'm worried that copying so much code into private memory for every process will use significant amounts of memory.
If the GUI library is big, it may run in a separate process. The cost of this is that communication between library and main program will be slower because it needs to switch the memory map.
most programs will use the same libc functions
I often use static linking of libc. It works fine and it doesn't copy too much code into the executable file. If a GUI library has a proper modular design, it should be possible to link it statically without copying everything. It might be possible to test this with some open source GUI library to see how much code will be copied with static linking.
Linux supports custom executable file formats, or for this experiment we could use ELF with a different dynamic linker as a custom ELF-interpreter.
That's interesting. I guess this requires that you make your own loader. I would still use ELF format but add a custom section type.


Author: Joe Duarte Date: 2016-06-17 00:41
I don't think bloated GUI libraries will be an issue. Remember that Agner is proposing an entirely new ISA. Windows 8 and Mac OS 10.11 will never be ported to this ISA. Let's imagine that Agner's proposal garnered good reviews and wide industry attention. In an optimal scenario, we might see some version of CRISC implemented in silicon in 2023 or so. Realistically, any platforms deployed on it will be new, possibly built from scratch, like unikernels or Microsoft Research's library OSes. GUIs can be implemented very lightly now with immediate-mode libraries like imgui: https://github.com/ocornut/imgui

Statically-linked executables are gaining momentum with Go and, on the C side, musl. A musl executable is much lighter than a glibc one.

Agner, it seems like your proposal could either eliminate the need for tree-shaking, or require a new kind of tree-shaking, depending on implementation details. Would code be tree-shaken by default, in that nothing is included unless it's used, kind of an opt-in vs opt-out scenario?

You'll have to cover the security angles here or no one will accept it. There are lots of pointer memory management add-ons to C right now that make decisions about which kinds of pointers will be stored where and so forth. How will your ISA integrate with those methods? I'm thinking of things like Code Pointer Integrity and SafeStack: http://dslab.epfl.ch/proj/cpi/


Author: Agner Date: 2016-06-18 00:44
Joe Duarte wrote:
GUIs can be implemented very lightly now with immediate-mode libraries like imgui: https://github.com/ocornut/imgui

Statically-linked executables are gaining momentum with Go and, on the C side, musl. A musl executable is much lighter than a glibc one.

Agner, it seems like your proposal could either eliminate the need for tree-shaking, or require a new kind of tree-shaking, depending on implementation details. Would code be tree-shaken by default, in that nothing is included unless it's used, kind of an opt-in vs opt-out scenario?
All function libraries will work like static libraries (*.lib in Windows, *.a in Unix) in the sense that they include only what is needed.
You'll have to cover the security angles here or no one will accept it. There are lots of pointer memory management add-ons to C right now that make decisions about which kinds of pointers will be stored where and so forth. How will your ISA integrate with those methods? I'm thinking of things like Code Pointer Integrity and SafeStack: dslab.epfl.ch/proj/cpi/
Thanks for the interesting links. You are touching on an important topic. We have to think security into the system right from the beginning.
I have proposed a dual stack system where the call stack is separate from the data stack. A buffer overflow will be unable to compromise the return address of a function. Jump tables for switch/case statements and virtual function pointer tables for C++ polymorphism can be placed in the CONST section which is read-only. What remains is function pointers declared explicitly in the program. These may be placed in a protected static memory area or on the heap at a non-predictable address. An input buffer may be placed in its own sandbox where it is protected from overflow.


Author: Bigos Date: 2016-06-18 04:40
About security: some time ago I had an idea (interesting or not) that could be useful for a new ISA.

What about making a special "function entry" instruction that has to be placed as the first instruction of every function? The call instructions would check whether they point to this special instruction and bail out if not. This way it is no longer possible to call into the middle of a function. For tail calls, we could make an instruction that behaves like a jump but also checks for this restriction. I am not sure whether the extra instruction (or maybe a bit in the first instruction?) is worth it, but we could combine it with what is usually placed as the first instruction of a function (like stack manipulation to reserve local variables) in order to save space.

We could also place some kind of function metadata there (like used registers, which are for input, which are for output, etc.).
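
A toy software model of that check; the opcode value and helper names are invented here for illustration, nothing like this is defined in the proposal:

Code: Select all

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy model: every function must begin with a FUNC_ENTRY marker, and call or
 * checked-jump instructions verify it.  The opcode value and names below are
 * invented for illustration. */
#define FUNC_ENTRY 0xE1u

static const uint8_t code[] = { FUNC_ENTRY, 0x10, 0x20, 0x30 };   /* a tiny "function" */

static void checked_call(size_t target) {
    if (code[target] != FUNC_ENTRY) {            /* target is not a legal entry point */
        fprintf(stderr, "fault: call into the middle of a function\n");
        exit(1);
    }
    /* ...push the return address and transfer control... */
}

int main(void) {
    checked_call(0);    /* fine: points at the FUNC_ENTRY marker */
    checked_call(2);    /* faults: mid-function target */
    return 0;
}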


Author: Freddie Witherden Date: 2016-06-02 17:37
With regards to static vs dynamic linking it is probably worth considering the following note by Ulrich Drepper:

https://www.akkadia.org/drepper/no_static_linking.html

who was the maintainer of glibc for several years and the author of the excellent series on "What Every Programmer Should Know About Memory". On balance -- and speaking as someone who comes from the HPC side of things -- I am inclined to agree with his points.

With regards to "It is not uncommon to load a DLL or SO of one megabyte and use only one kilobyte of it.": on most (all?) systems .so's are mmap'ed and so benefit from on-demand paging; hence you only pay for what you use, modulo the page size. Further, while an application may call only a small number of functions from an .so, those functions inside of the .so may go on to call a large number of additional -- internal -- functions. These 'iceberg' .so's are very common for GUI libraries and the like. Of course these internal functions need not be exported and hence should not suffer from a performance penalty beyond that associated with being PIC.


Author: Agner Date: 2016-06-04 03:12
Freddie Witherden wrote:
With regards to static vs dynamic linking it is probably worth considering the following note by Ulrich Drepper
Dynamic linking relies heavily on memory paging. My proposal for CRISC1 is to avoid memory paging completely. I am replacing dynamic linking with load-time linking which can do the same as dynamic linking with respect to updating a library without updating the calling program. There is even a feature for choosing between different libraries or functions at load time. Very large libraries, such as GUI frameworks, can be loaded as a separate process.

There are significant performance costs to DLLs and SOs which I want to avoid. Each DLL/SO has its own memory pages for code and data. You will be wasting a whole 4kB page even if the function uses only a few hundred bytes. The data section of a DLL/SO cannot be shared. The code section can be shared in most cases, but it is not shared if it contains relocated cross-references. The amount of memory that you are saving by sharing functions between multiple processes is very small compared to the amount of memory that you are wasting by loading a lot of other functions that your program is not using.

The dynamic libraries are scattered around in memory which means poor caching. Many functions are placed at the start of a memory page. This means that cache line sets with low numbers are used disproportionately more, or you need a more complicated cache design to distribute the load on cache line sets more evenly.

If a DLL contains many functions and some of these functions - not the ones you are using - link to other DLLs, then you will load these other DLLs unnecessarily, or suffer the unpredictable response time of lazy loading.

Unix-style shared objects have further performance losses due to the requirement of the so-called position-independent code, which actually involves something completely different, namely symbol interposition. 32-bit x86 shared objects have an additional performance problem because the instruction set has no position-independent addressing mode.


Author: Freddie Witherden Date: 2016-06-04 13:49
Out of interest have you checked out the work of Darek Mihocka who some years ago wrote a multi-part series on fixing x86 (and, interestingly, proposed eliminating hardware paging):

www.emulators.com/docs/nx03_10fixes.htm
www.emulators.com/docs/nx04_malloc.htm
www.emulators.com/docs/nx05_vx64.htm

With regards to the cost of PIC I do not believe that it is as high as it used to be. Case in point: the majority of executables shipped by modern distributions are PIC as a means of improving the effectiveness of ASLR. Yet, when this change happened, I do not recall seeing complaints from users or any serious performance degradation.


Author: Agner Date: 2016-06-06 14:25
Freddie Witherden wrote:
have you checked out the work of Darek Mihocka who some years ago wrote a multi-part series on fixing x86 (and, interestingly, proposed eliminating hardware paging)
Thank you for the interesting links. His experiments show that the costs of the complicated memory management and the huge page tables are even higher than I thought. I agree with him that we need a fundamental redesign.

Many of the security problems that Darek discusses can be solved with CRISC1. The memory map in my proposal is so small that it is no problem to give each thread in a process its own memory map and its own private data space that cannot be accessed from other threads. Do you know if any other systems have memory protection between threads? I can't find any (other than emulators). Of course we have thread-local storage, but that does not guarantee that one thread cannot corrupt data belonging to another thread in the same process in case of program bugs or malicious hacking.

I don't see any reason why different threads should have access to each other's private data. Synchronization structures (e.g. semaphores) and resource sharing between threads should go through the shared main memory of the application, not the private memory of one thread.
agner
Site Admin
Posts: 177
Joined: 2017-10-15, 8:07:27
Contact:

Is it better to have two stacks?

Post by agner »

Author: Agner Date: 2016-06-05 13:26
In the beginning of this thread I argued against having a link register. Storing return addresses on the stack is simpler.

Now I wonder if it is better to have two stacks: a call stack for return addresses and a data stack for the local variables of functions. The call stack will be quite small in most cases because the nesting level of function calls is limited in most programs. We can have a small rolling stack on the chip which is used most of the time. If the on-chip stack overflows then it must be spilled to memory. Let's take an example: we have a rolling stack on the chip with 16 entries, and we are running a program where function calls are nested 20 deep. The on-chip stack will be copied to memory at call level 17. The on-chip stack entries are overwritten one by one (oldest first) on each deeper call. After call number 20 there will be 4 entries that have been overwritten. The first 16 returns after the deepest call can take the return addresses from the on-chip stack. We don't have to reload the on-chip stack from memory until we come down to level 4. We will never have to spill the on-chip stack to memory inside a loop more than on the first iteration unless there is very deep function nesting or there are recursive functions inside the loop. In other words, the cost of spilling the on-chip stack to memory is minimal because it will not occur repeatedly in a loop except with recursive functions.
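
A rough software model of this rolling stack, assuming a 16-entry circular buffer and a simple one-entry-at-a-time spill policy (the actual spill mechanism would be implementation dependent):

Code: Select all

#include <stdint.h>

#define ONCHIP 16                         /* on-chip entries, as in the example above */

typedef struct {
    uint64_t buf[ONCHIP];                 /* circular buffer holding the newest return addresses */
    uint64_t mem[1024];                   /* spill area in memory (fixed size here for simplicity) */
    int depth;                            /* current call nesting depth */
    int spilled;                          /* how many of the oldest entries live only in memory */
} call_stack;

static void push_ret(call_stack *s, uint64_t addr) {
    if (s->depth - s->spilled == ONCHIP) {            /* buffer full: spill the oldest entry */
        s->mem[s->spilled] = s->buf[s->spilled % ONCHIP];
        s->spilled++;
    }
    s->buf[s->depth % ONCHIP] = addr;
    s->depth++;
}

static uint64_t pop_ret(call_stack *s) {
    s->depth--;
    if (s->depth >= s->spilled)                       /* return address is still on chip */
        return s->buf[s->depth % ONCHIP];
    s->spilled = s->depth;                            /* came back down into the spilled part: */
    return s->mem[s->depth];                          /* reload from memory */
}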

In fact, modern microprocessors already have such a rolling call stack on the chip. It is used for predicting return addresses. We might as well use this structure as the genuine call stack rather than just a shadow stack used for prediction. The prediction of return addresses will then be simple and perfect, of course.

There is also a security advantage to having a separate call stack. The return address of a function cannot be overwritten by software bugs or malicious buffer overflow attacks. Legitimate attempts to change the return address of a function will also be prevented, of course, but this is bad programming anyway because it wrecks the prediction mechanism.

The mechanism of the on-chip stack can be hidden in special registers. The application does not need to access the stack pointer of the call stack, except for stack unwinding in the exception handler.

The cost of having two stacks is the complexity of saving the on-chip stack to memory when it overflows. This can be implemented in hardware or software. Memory management will also be a little more complex because there are two stacks that can overflow. The size of both stacks can be predicted by using the method explained in my document, except for recursive functions.

I will propose, tentatively, to allow both principles - one stack or two stacks - in the CRISC1 architecture. It does not matter to the software whether there is one or two stacks except when function parameters are saved on the stack. A function needs to know the addresses of its parameters relative to the stack pointer, and this depends on whether there is a return address in between. It is rare that we will have parameters on the stack because the first 16 general purpose registers and the first 16 vector registers can be used for function parameters.

If we want the same software to be compatible with both one-stack and two-stack systems then we need to solve the problem of the address of parameters on the stack, however rare it may be. The simplest solution is to put an empty space on the data stack where the return address would be if we have a separate call stack. But I want to suggest a smarter solution: don't put parameters on the stack. If there are more parameters than registers then put the extra parameters in a list and use one register to point to this list. This solution is simple and efficient. We are getting rid of the old discussion of which order of parameters on the stack is better, and whether the stack should be cleaned up by caller or callee.
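
As an illustration, this is roughly how a compiler might lower a call with more parameters than registers under such a convention; the struct layout and names below are invented, not part of the proposal:

Code: Select all

/* The first parameters travel in registers; any surplus goes into a list
 * that a single register points to. */
struct extra_args {
    long   p17, p18;                      /* parameters that did not fit in registers */
    double p19;
};

long many_args(long p1, long p2,          /* ... up to 16 parameters in registers ... */
               const struct extra_args *extra) {
    return p1 + p2 + extra->p17 + extra->p18 + (long)extra->p19;
}

long caller(void) {
    struct extra_args extra = { 17, 18, 19.0 };   /* caller may place this list anywhere */
    return many_args(1, 2, &extra);               /* one register carries the pointer */
}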

So this is my proposal. Small simple systems can have one unified stack, and large systems where performance or security is important can have two stacks. The size of the on-chip call stack is implementation dependent. The calling convention is changed so that parameters are never saved on the stack. Application programs don't need to care whether there is one or two stacks, but the stack unwinding mechanism in the exception handler needs to know, of course.


Author: Hubert Lamontagne Date: 2016-06-07 16:46
Having a second stack in a small separate memory just for IPs does sound workable. It does have the advantage that CALL and RET aren't memory operations anymore, which means they take a single micro-op, and you get real CALL/RET opcodes instead of having to decompose them into jump-and-link+store LR and load LR+jump LR. So given the appropriate hardware implementation, it could reasonably translate into a small speed gain for code that does a lot of function calling (you're using one less data cache port and micro-op on every CALL and every RET).

This means that it could reasonably be worth the extra hardware complexity - I think it would probably need a write buffer and register-renamed SP so that it can be rolled back in case of a branch prediction failure/page fault/unpredicted reordered store to already loaded memory address/etc, so you have to factor this into the cost.


Author: Eden Segal Date: 2016-06-13 22:40
What about debugging? Putting the call stack on the CPU is good for performance, but if the table is not visible, there are 16 function calls the debugger doesn't know about.
While you can say it will be worked around in debug mode, debugging an optimized application is still necessary, and it is hard enough on the debugger as it is.


Author: Agner Date: 2016-06-13 23:36
Eden Segal wrote:
What about debugging?
No problem. There will be instructions for accessing the on-chip call stack. These instructions must be accessible to debugger and exception handler. They may or may not be accessible to the application in user mode.


Author: Hubert Lamontagne Date: 2016-06-14 21:28
The other reason I can think of to have user-accessible stack is supporting coroutines.


Author: Agner Date: 2016-06-14 23:59
Hubert Lamontagne wrote:

The other reason I can think of to have user-accessible stack is supporting coroutines.

If coroutines are implemented with a call stack, it would be filled up. Are you thinking about removing an entry from the call stack before calling another routine? The same functionality can be obtained with a tail call or a state machine.

The next version of my proposal will have a calling convention that makes tail calls possible in all cases. In the rare case that there are more than 16 integer parameters or 16 vector parameters to a function, the remaining parameters will not be stored on the stack but in a list pointed to by a register. The same method applies to a variable arguments list.


Author: Hubert Lamontagne Date: 2016-06-15 19:15
Agner wrote:

Hubert Lamontagne wrote:
The other reason I can think of to have user-accessible stack is supporting coroutines.
If coroutines are implemented with a call stack, it would be filled up. Are you thinking about removing an entry from the call stack before calling another routine? The same functionality can be obtained with a tail call or a state machine.

The next version of my proposal will have a calling convention that makes tail calls possible in all cases. In the rare case that there are more than 16 integer parameters or 16 vector parameters to a function, the remaining parameters will not be stored on the stack but in a list pointed to by a register. The same method applies to a variable arguments list.
Well, within a high-level language engine you can completely bypass the normal function calling, which lets you replace all execution flow with a state machine or do tail calls for functional programming, yes. I was thinking more of the context of something like C++ or ASM, where you're stuck with calling conventions and where coroutine handling libraries often allocate a separate stack for each coroutine using malloc(), then swap around the stack pointer.

As for the cases where you have more than 16 integer or vector/float parameters, how do you allocate that list for extra parameters? Isn't the simplest way to do that to allocate it on the stack? Or would you call malloc()?


Author: Agner Date: 2016-06-15 23:43
If you want multiple stacks for coroutines, why not use multiple threads? Anyway, a coroutine switch would work much the same as a thread switch does in the operating system.

Hubert Lamontagne wrote:
As for the cases where you have more than 16 integer or vector/float parameters, how do you allocate that list for extra parameters? Isn't the simplest way to do that to allocate it on the stack? Or would you call malloc()?
The caller can put the list anywhere it wants. Possibly on the stack. My point is that the called function will not address its parameters relative to the stack pointer. We get rid of the old discussion about which order of parameters on the stack is more logical and who should clean up the stack.


Author: Hubert Lamontagne Date: 2016-06-16 14:22
Agner wrote:
If you want multiple stacks for coroutines, why not use multiple threads? Anyway, a coroutine switch would work much the same as a thread switch does in the operating system.
Yeah, the catch is that a thread switch goes through the thread scheduler. That's totally fine if you're doing something that doesn't need any tight timing (ex: server software), but it's a problem in very real time apps - I'm thinking of the case of trying to write a video game where every object gets a time slice on every frame, you end up with hundreds of threads and if the OS decides to put even just one of them in the freezer for a few dozen ms, you get frame stutter... Though TBH games aren't normally written this way anyways (plus it makes saving/restoring the game state hard).


Author: Agner Date: 2016-06-16 23:44
Hubert Lamontagne wrote:
the catch is that a thread switch goes through the thread scheduler. That's totally fine if you're doing something that doesn't need any tight timing (ex: server software), but it's a problem in very real time apps
You are right. We need to think about real time applications. Is there anything special we need to do to support real time operating systems? It would be nice to be able to reserve one or more CPU cores in a multicore processor for a particular critical task, but that is an operating system issue. The hardware just needs to support fast task switching. We can have extremely fast task switching if the CPU has multiple on-chip memory maps that it can switch between.


Author: Hubert Lamontagne Date: 2016-06-17 10:07
Agner wrote:
Hubert Lamontagne wrote:
the catch is that a thread switch goes through the thread scheduler. That's totally fine if you're doing something that doesn't need any tight timing (ex: server software), but it's a problem in very real time apps
You are right. We need to think about real time applications. Is there anything special we need to do to support real time operating systems? It would be nice to be able to reserve one or more CPU cores in a multicore processor for a particular critical task, but that is an operating system issue. The hardware just needs to support fast task switching. We can have extremely fast task switching if the CPU has multiple on-chip memory maps that it can switch between.
It's a bit of an application-specific thing.

For instance, I'm 99% sure MacOS has a special case in its scheduler for the audio thread, so that when the sound hardware sends a "the sound buffer is 50% empty, fill the other half" interrupt, the sound code runs straight off the interrupt and literally outprioritizes EVERYTHING ELSE (including hardware drivers!), completely disables priority inversion, completely bypasses any sort of fair time-slice allocator and so forth. If the audio thread falls on a locked mutex, it might even elevate the thread holding that mutex to this crazy "screw everything else" priority - this is necessary to use mutexes or things locked by mutexes like malloc() in the audio thread, otherwise trying to lock the mutex might result in a task switch to a low-priority GUI thread which then gets preempted, and then you get an audio glitch.

In Windows, the scheduler is generally designed for networking, so you get screwed over: at the very least, you need to install a special app to get low latency audio (ASIO4ALL), you need to spend half an hour fine-tuning buffer length, any badly written driver can make it completely impossible (for instance there was an Intel network driver that would periodically lock everything for 5ms), and even when it does work, all other applications get no sound because you have to bypass the sound mixer, which runs at way too low priority (and on way too large blocks). In Linux, afaik this sort of scheduling is possible (although it's not really meant for it), but requires special flags in the thread scheduler priority and in the mutexes (and possibly a kernel patch), and there's a video from Android OS engineers about getting a low latency audio path in Android using that (tl;dr version: it took some work and many versions!).

Another case I can think of is game consoles... if you have 8 cores, does it make any sense to run ANYTHING ELSE than the currently played game on 7 of those cores?


Author: Freddie Witherden Date: 2016-06-22 11:40
With regards to exposing a second stack it is worth considering a recent proposal from Intel termed Control-flow Enforcement Technology: https://software.intel.com/sites/defaul ... review.pdf

The principle is simple: require that the instruction immediately following an indirect jump or call is an ENDBRANCH instruction (a NOP on current systems). There is also a series of extensions to enable the shadow stack to be manipulated by debuggers and the like. It seems like a reasonable model of how someone would go about exposing and managing a second return call stack.

With regards to supporting two operating modes (with and without a second stack), my thoughts here are that it is simply not worth it. If your platform does not support some sort of hardware functionality to prevent these ROP-type attacks then compilers will kludge around it on your behalf. Case in point are the stack-smashing protection technologies which are present in many compilers. Obviously, system developers feel that the impact on performance is more than offset by the resulting improvements in security.
agner
Site Admin
Posts: 177
Joined: 2017-10-15, 8:07:27
Contact:

Now on Github

Post by agner »

I have now moved this project to Github as some of you recommended. This will facilitate the collective development of software toolchain and hardware implementation.

The name CRISC was taken, so I have changed the name to ForwardCom. It stands for forward compatible computer system.

I have converted the manual to LaTeX so that Github can show diffs. The pdf manual is built from many tex files. The pdf manual version 1.02 is here https://github.com/ForwardCom/manual/ra ... ardcom.pdf.

New in version 1.02:
  • Security features added.
  • Support for dual stack.
  • Some instructions and formats modified, including more formats for jump and call instructions.
  • System call, system return and trap instructions added.
  • New addressing mode for arrays with bounds checking.
  • Memory management and ABI standards described in more detail.
  • Instruction list in comma separated file instruction_list.csv to be used by assemblers, emulators, debuggers, etc.
  • Object file format defined in file elf_forwardcom.h
agner
Site Admin
Posts: 177
Joined: 2017-10-15, 8:07:27
Contact:

Proposal for instruction set - now on Github

Post by agner »

Author: Agner Date: 2016-06-26 02:48
The discussion in the thread Proposal for an ideal extensible instruction set has been very fruitful and led to a lot of new ideas. I don't know where this will lead us, but the project looks so promising that it is worth pursuing further.

I have put the project on Github to facilitate collective development of the instruction set, software toolchain, system standards, and hardware implementations. The address is https://github.com/ForwardCom. The development of hardware should be done in collaboration with the Opencores forum.

The name CRISC was taken, so I have changed the name to ForwardCom. It stands for "forward compatible computer system".

The latest version of the manual is at https://github.com/ForwardCom/manual or http://www.agner.org/optimize/#instructionset. I have also reserved the domain name http://www.forwardcom.info.

The ForwardCom project includes both a new instruction set architecture and the corresponding ecosystem of software standards, application binary interface (ABI), memory management, development tools, library formats and system functions. Here are some highlights:
  • The ForwardCom instruction set is a compromise between the RISC and CISC principles, combining the fast and streamlined decoding and pipeline design of RISC systems with the compactness and more work done per instruction of CISC systems.
  • The ForwardCom design is scalable to support small embedded systems as well as large supercomputers and vector processors without losing binary compatibility.
  • Vector registers of variable length are provided for efficient handling of large data sets.
  • Array loops are implemented in a new flexible way that automatically uses the maximum vector length supported by the microprocessor in all but the last iteration of a loop. The last iteration automatically uses a vector length that fits the remaining number of elements. No extra code is needed to deal with remaining data and special cases. There is no need to compile the code separately for different microprocessors with different vector lengths (see the sketch after this list).
  • No recompilation or update of software is needed when a new microprocessor with longer vector registers becomes available. The software is guaranteed to be forward compatible and take advantage of the longer vectors of new microprocessor models.
  • Strong security features are a fundamental part of the hardware and software design.
  • Memory management is simpler and more efficient than in traditional systems. Various techniques are used for avoiding memory fragmentation. There is no memory paging and no translation lookaside buffer (TLB). Instead, there is a memory map with a limited number of sections with variable size.
  • There are no dynamic link libraries (DLLs) or shared objects. Instead, there is only one type of function libraries that can be used for both static and dynamic linking. Only the part of the library that is actually used is loaded and linked. The library code is kept contiguous with the main program code in almost all cases. It is possible to automatically choose between different versions of a function or library at load time, based on the hardware configuration, operating system, or user interface framework.
  • A mechanism for calculating the required stack size is provided. This can prevent stack overflow in most cases without making the stack bigger than necessary.
  • A mechanism for optimal register allocation across program modules and function libraries is provided. This makes it possible to keep most variables in registers without spilling to memory. Vector registers can be saved in an efficient way that stores only the part of the register that is actually used.
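
A minimal C sketch of the array-loop pattern mentioned above, where max_vector_length stands in for whatever the running processor reports (in ForwardCom itself this is done with vector instructions and a length register, not scalar code):

Code: Select all

#include <stddef.h>

/* Process n elements, letting the hardware vector length decide the step.
 * The last iteration automatically shrinks to the remaining element count,
 * so no separate scalar tail loop is needed. */
void add_arrays(float *a, const float *b, size_t n, size_t max_vector_length) {
    size_t done = 0;
    while (done < n) {
        size_t vlen = n - done < max_vector_length ? n - done : max_vector_length;
        for (size_t i = 0; i < vlen; i++)             /* one vector operation in hardware */
            a[done + i] += b[done + i];
        done += vlen;
    }
}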

Author: Joe Duarte Date: 2016-07-04 01:49
Hi Agner – Can you recap how you resolved the objection that Hubert had to unified int/floating point registers? He had argued for the benefits of dedicated FP registers, but I can't find the resolution (the thread became quite long).

Other questions:

2. How do you situate your ISA in the context of the current trend toward heterogeneous computing, GPGPU, AMD's HSA, OpenCL, OpenMP, OpenACC, FPGAs, etc.? Will the new ISA help, hinder, or be neutral to heterogeneous programming?

3. This is an interesting time to develop a new ISA because of the oft-claimed end of Moore's Law scaling. How do the anticipated physics of the next 10 – 20 years of microprocessor design impact ForwardCom? Is the physical substrate irrelevant? (I'm thinking of III-V semiconductors, mostly) Or is ForwardCom shaped by present-day silicon and fab assumptions? It seems like some aspects of an ISA must be affected by the underlying physics and engineering constraints. For example, whether unified registers can be efficiently implemented, or whether orthogonal instructions can be efficiently implemented, and definitely how variable-length vector registers will perform.

4. Is ForwardCom extensible to a unified RAM (volatile) and non-volatile memory space? There are a few proposals out now, including Moneta D, that rethink the file system and make calling solid state storage a lot like calling a memory address (or just the same thing). They mostly hinge on faster-than-SSD media, like Intel and Micron's new 3D XPoint, phase-change memory, etc. but you could probably do it with fast SSDs.

I think it would be interesting if applications could set a few custom constants at start time. A while back I asked about having useful constants hardwired in the CPU (e.g. π), and you guys explained why it probably wouldn't be worth it. I think letting each application tell the CPU to save a few constants that the application will often use might be more useful. The point is to be able to invoke the constant without an immediate, but just with a much smaller shortcode (two or three bits). It's kind of like an application-specific cache (an idea which Intel has implemented a bit), but much, much smaller. The programmer might not even need to know or designate the constants – it could be empirically determined by the compiler. This would only help certain kinds of applications, but that's true of many things – it would be a way to reduce code size but more importantly to have a sticky, stable cache immune from misses and eviction. An expanded version of this would be to be able to define and save specific, short code sequences. (I see your Chapter 6, but it seems like a placeholder for now.)


Author: Agner Date: 2016-07-04 05:47
Joe Duarte wrote:
Hi Agner – Can you recap how you resolved the objection that Hubert had to unified int/floating point registers?
Hubert's arguments are here: http://www.agner.org/optimize/blog/read.php?i=421#472.

My reason for using the same registers for floating point scalars and floating point vectors is that floating point code often contains calls to mathematical function libraries. Such a library can use variable-length vectors for parameters and results. The same library function can be called with a scalar or a vector because a scalar is simply handled as a vector with one element. This will make it easy for the compiler to vectorize the code. ForwardCom has a separate register set for integer scalars. These are useful for pointers, loop counters, array indexes, etc. that don't need vectors, and they are easier to save and restore than vector registers.
How do you situate your ISA in the context of the current trend toward heterogeneous computing, GPGPU, AMD's HSA, OpenCL, OpenMP, OpenACC, FPGAs, etc.? Will the new ISA help, hinder, or be neutral to heterogeneous programming?
People are sometimes doing non-graphic calculations in a GPU. This is complicated and awkward and has many problems with data transfer and synchronization. ForwardCom will eliminate this need. You may have a system with two sets of ForwardCom processors, one for graphics and one for everything else, but in most cases you will just have a system with several identical all-purpose ForwardCom processors.
This is an interesting time to develop a new ISA because of the oft-claimed end of Moore's Law scaling. How do the anticipated physics of the next 10 – 20 years of microprocessor design impact ForwardCom? Is the physical substrate irrelevant?
The physics affects the clock frequency and the maximum vector length that can be implemented efficiently. Some kind of layered or 3-D chip could possibly help make very long vectors more attractive, because it reduces physical distances on the chip.
Is ForwardCom extensible to a unified RAM (volatile) and non-volatile memory space? There are a few proposals out now, including Moneta D, that rethink the file system and make calling solid state storage a lot like calling a memory address (or just the same thing). They mostly hinge on faster-than-SSD media.
Yes. The memory management system of ForwardCom divides a program's memory into two blocks: (1) Code and constants, (2) read/write data including static data, stack and heap. Block (1) is addressed relative to the instruction pointer, while block (2) is addressed relative to a special register called data section pointer (DATAP) and the stack pointer. Nothing in one block is addressed relative to something in the other block. Each of the two blocks can be placed at any address independently of the other block. We can easily place block (1) in non-volatile memory if only it has fast read access. It doesn't need fast write access. Block (2) should be placed in volatile RAM. Microcontrollers with Harvard architecture are like this anyway.
I think it would be interesting if applications could set a few custom constants at start time. A while back I asked about having useful constants hardwired in the CPU (e.g. π), and you guys explained why it probably wouldn't be worth it. I think letting each application tell the CPU to save a few constants that the application will often use might be more useful. The point is to be able to invoke the constant without an immediate, but just with a much smaller shortcode (two or three bits). It's kind of like an application-specific cache
ForwardCom can have constants embedded in instructions. This includes integer constants of all sizes and floating point constants of half, single and optionally double precision (requires 1, 2 and 3 word instruction size, respectively). Your idea would require an extra register set for saving and restoring data that will be needed later. This extra register set would just require two extra instructions for copying data from a normal register to an extra register and vice versa. This could be useful for many purposes and it would reduce the pressure on the data cache. The only problem I can see is that it will be an extra complication to the function calling convention because we need to know whether a function modifies the extra registers or not.
An expanded version of this would be to be able to define and save specific, short code sequences. (I see your Chapter 6, but it seems like a placeholder for now.)
That would be more complicated.


Author: Hubert Lamontagne Date: 2016-07-06 09:51
A couple questions:

- What's the exact algo for determining if an OP contains a memory load?

- What's the exact algo for determining if an OP contains a memory store?

Generally you need to know this as soon as possible in the pipeline, because a lot of out-of-order CPUs do the memory ops in-order well before the other stuff. And even when memory ops are reordered, they need a separate reordering to preserve the proper store sequence. (note: this must include ALL cases)

Bonus question:

- What's the exact algo for determining if an OP is integer or FPU/vector?

Same as above, but this is to figure out which ops go into the integer reg-rename + queue + ALUs + retirement, and which ones go to the FPU/vector side.


Author: Agner Date: 2016-07-06 11:05
Hubert Lamontagne wrote:
What's the exact algo for determining if an OP contains a memory load?
Thanks for the questions. This is important for efficient decoding.

The format is determined by the instruction length (IL) and mode bits, see table 3.14 in the manual. The following formats have a memory operand: 0.4 - 0.9, 2.0, 2.2, 2.4.x, 2.8.x (see table 3.14) and two tiny instructions (see table 4.6 and 4.7).

The only mandatory instructions that have memory store are the store instruction (see table 4.5) with the same formats as above, and two tiny instructions (table 4.6 and 4.7). Other instructions with memory store are optional, listed in table 4.10.
What's the exact algo for determining if an OP is integer or FPU/vector?
General purpose registers:
  • The mode field is 0 or 1 when ignoring M (format 0.0, 0.1, 0.8, 0.9, 1.0, 1.1, 1.8, 2.0, 2.1, 2.8.x, 2.9, 3.0, 3.1)
  • Format 2.6
  • Tiny instruction with OP1 < 16
  • Jump instructions with no M field (format 1.5, 2.7.2, 2.7.3, 2.7.4) or M = 0.
  • Registers operands for pointer, index and vector length are always general purpose registers
Vector registers:
  • The mode field is 2-7, except format 2.6
  • Tiny instruction with OP1 ≥ 16
  • Jump instructions with an M field (format 1.4, 2.7.0, 2.7.1, 3.0.1) and M = 1
Jump instructions also have a separate category:
  • Format 1.4 and 1.5
  • Format 2.7 with OP1 = 0-4
  • Format 3.0 with OP1 = 1
As you can see, I have tried to make the rules reasonably consistent, but some exceptions to the rules have been necessary for the sake of making instructions as compact as possible.


Author: Hubert Lamontagne Date: 2016-07-07 01:45
Some questions about your memory model:

- For every Load/Store, how is the CPU going to differentiate which memory map entry it falls into? If you have 16 memory map entries, does it compare every load/store address to all 16 block start addresses? (which requires 16 comparators in hardware if you want to calculate it in a single cycle?! or multiple cycles of latency and 4~5 comparators?!?)

- If I understand your memory allocation scheme correctly, every program's heap size is determined beforehand, and when it grows beyond that size, you absolutely have to use up one more memory map entry? Wouldn't that use up memory map entries really fast and go through hundreds of memory map entries for complex applications like a web browser? And then force the copy of hundreds of megabytes of data when something like a video game runs out of memory map entries?

- Wouldn't you at least require memory blocks to be aligned to L1 cache way size? (usually 4k~32k on modern CPUs?)... Or at very least, wouldn't it have to be aligned to L1 cache line size? (32~64 bytes typically?)... The problem with non-cache-aligned memory blocks (and more specifically addends) is that it forces you to add up this memory offset after calculating memory address but before reading it from the cache, since you no longer have a 1:1 relationship between cache lines and virtual addresses. Forcing alignment lets you do this offsetting *in parallel* with the cache access, which saves at least 1 full cycle of cache latency on, like, everything. The other option is to use a virtually-addressed cache, but this has the downside that it introduces aliasing problems (and you have to flush the cache when the mapping changes).


Author: Agner Date: 2016-07-07 02:56
Hubert Lamontagne wrote:
Some questions about your memory model:

- For every Load/Store, how is the CPU going to differentiate which memory map entry it falls into? If you have 16 memory map entries, does it compare every load/store address to all 16 block start addresses? (which requires 16 comparators in hardware if you want to calculate it in a single cycle?! or multiple cycles of latency and 4~5 comparators?!?)
Both methods are allowed, or some hybrid. Comparators are not that expensive so we can probably afford to have 16 comparators. It takes several clock cycles to fetch a memory operand from cache or write a memory operand to cache, so it is also possible to use a binary tree algorithm with 2-3 comparisons per clock cycle while fetching the operand speculatively at the same time. Most memory accesses will use the same one or two map entries as previous accesses so we can save power by starting each search where the last search ended.
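
A software sketch of the binary-search variant, assuming the map entries are kept sorted by start address (a hardware implementation could instead do all comparisons in parallel):

Code: Select all

#include <stdint.h>

typedef struct {
    uint64_t start;                        /* first address covered by this entry */
    uint64_t attributes;                   /* access rights etc. */
} map_entry;

/* Return the index of the entry whose range contains addr, assuming the map
 * is sorted by start address and entry 0 starts at or below addr. */
static int find_map_entry(const map_entry *map, int n, uint64_t addr) {
    int lo = 0, hi = n - 1;
    while (lo < hi) {                      /* a few comparisons per clock cycle */
        int mid = (lo + hi + 1) / 2;
        if (addr >= map[mid].start)
            lo = mid;                      /* addr is in this entry or a later one */
        else
            hi = mid - 1;
    }
    return lo;
}
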
If I understand your memory allocation scheme correctly, every program's heap size is determined beforehand, and when it grows beyond that size, you absolutely have to use up one more memory map entry? Wouldn't that use up memory map entries really fast and go through hundreds of memory map entries for complex applications like a web browser?
This is a weak spot indeed. I am imagining a scheme where the heap is extended with memory blocks that grow exponentially in size. If a 1 MB heap is exhausted you allocate 2 or 4 MB more. If that gets exhausted you allocate 16 MB more, etc. In addition to this, we should do everything to predict the heap size. As a last resort, move data if the memory becomes too fragmented (this will cost a delay, of course, but less than the delay of swapping memory to disk).

With these methods, we should still be able to keep the memory map so small that it is more efficient than a TLB with fixed page sizes. The maximum size of the memory map is implementation dependent.
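
A minimal sketch of such a growth policy; the factor and minimum size are placeholders, not part of the proposal:

Code: Select all

#include <stddef.h>

/* Decide how much extra heap to request when the current heap is exhausted.
 * Growing the blocks exponentially keeps the number of memory map entries
 * small.  The constants here are examples only. */
static size_t next_heap_extension(size_t current_heap_size) {
    const size_t minimum = (size_t)1 << 20;      /* never request less than 1 MB */
    size_t next = current_heap_size * 2;         /* e.g. 1 MB heap -> 2 MB more */
    return next < minimum ? minimum : next;
}
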
Wouldn't you at least require memory blocks to be aligned to L1 cache way size? (usually 4k~32k on modern CPUs?)... Or at very least, wouldn't it have to be aligned to L1 cache line size? (32~64 bytes typically?)
In most cases, yes. It depends on how the cache is designed. I have just specified 8 bytes as a minimum that all implementations must obey, including small systems with no cache. Smaller memory blocks may be used for special purposes such as shared memory blocks and protected sandboxes for secure input buffers. Maybe I should indicate more clearly that the 8 bytes is a minimum, not a requirement.
agner
Site Admin
Posts: 177
Joined: 2017-10-15, 8:07:27
Contact:

Whole-function vectorization and conditionals

Post by agner »

Author: Sylvain Collange Date: 2016-08-15 04:00
Agner wrote:
My reason for using the same registers for floating point scalars and floating point vectors is that floating point code often contains calls to mathematical function libraries. Such a library can use variable-length vectors for parameters and results. The same library function can be called with a scalar or a vector because a scalar is simply handled as a vector with one element. This will make it easy for the compiler to vectorize the code. ForwardCom has a separate register set for integer scalars. These are useful for pointers, loop counters, array indexes, etc. that don't need vectors, and they are easier to save and restore than vector registers.
This looks quite elegant. The vector length is passed implicitly to the caller without any change to the ABI.

How do you handle conditionals, though? For instance:

Code: Select all

for (int i = 0; i < N; i++) {
    if (a[i] != 0)
        b[i] = sin(a[i]);
}
Does the ABI define a register as the "current mask" (GPU-like)? Or should the caller compact input vectors to have only contiguous elements, and expand the output vectors (vector processor-like)?


Author: Agner Date: 2016-08-15 06:17
Sylvain Collange wrote:
Does the ABI define a register as the "current mask" (GPU-like)? Or should the caller compact input vectors to have only contiguous elements, and expand the output vectors (vector processor-like)?
Vector registers r1-r7 can be used as masks. The normal solution for your example would be to calculate the sin function on the whole a vector and then merge the result, using a mask for (a[i] != 0).
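
In scalar C terms, the suggested compute-then-merge pattern looks roughly like this (a sketch; elements where the mask is false keep the old value of b):

Code: Select all

#include <math.h>

/* Compute sin over the whole vector, then merge under the mask (a[i] != 0).
 * Elements where the mask is false keep the old value of b[i].  In hardware
 * this is one vectorized call plus a masked move, not a scalar loop. */
void masked_sin(double *b, const double *a, int n) {
    for (int i = 0; i < n; i++) {
        double s = sin(a[i]);                    /* computed for every element */
        b[i] = (a[i] != 0.0) ? s : b[i];         /* mask selects result or old value */
    }
}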

If you want to mask off some elements throughout the calculation of the sin function then the library function would need an extra parameter for the mask. (Some Intel math libraries actually have this). You wouldn't save any time by masking off elements during the calculation. The only reason I can see to have a mask on library functions is to avoid interrupts (traps) if floating point exceptions are enabled. The sin function wouldn't generate exceptions, but the logarithm function would in your case. The recommendation is to detect floating point errors as NAN's and INF's rather than using exceptions, because exceptions depend on the vector length. If you replace sin by log in your example then the log function would generate INF or NAN for non-positive values, and these values will be removed by the mask operation after the function call. If you insist on having floating point exceptions turned on, for whatever reason, then you might want a version of the log function with a mask or you could change the non-positive values before the call to the log function.

Compacting the vector, as you propose, might be useful if the vector is very sparse (only few of the elements are used) and you could compact several vectors into one before calling the function. However, compacting vector elements takes time so this would be useful only for very time-consuming functions.


Author: Sylvain Collange Date: 2016-08-15 07:35
Thanks. I was considering the more general case of vectorizing arbitrary functions, including functions with side effects. There is no reason to restrict function-level vectorization to a few hand-written math functions, is there?
As the compiler does not know in general whether a particular function has side effects or can raise exceptions, my point is that the mask could be part of the ABI, just as vector length (implicitly) is. This would enable vectorization-agnostic composition of function calls.

By the way, have you considered the micro-architecture and compiler implications of using a subset of vector registers as masks? It seems to introduce quite a lot of complexity:

- Instructions take up 5 input operands from vector registers (RS, RT, RU, Mask, RD). The old register value RD is required to merge the result.
Supporting 5-input instructions in a wide out-of-order superscalar is out of the question with the current state of technology: executing 3 masked vector instructions/clock would require 15 read ports in the SIMD register file, plus the ports needed for the load/store unit. Worse, making the operation dependent on the former value of the architectural destination register defeats the purpose of register renaming: write-after-read dependencies are still there.
A more acceptable implementation would break down each masked instruction into 2 µops: first a computation on the whole vector, then a merge (I believe that is how Knights Landing and Skylake handle masked AVX-512 instructions). This has a serious performance impact, and still does not completely eliminate write-after-read dependencies.

- The dependency with the old destination value and the merging operation could be avoided when unused lanes are set to 0 instead of the old value. Unfortunately, that information is encoded in the contents of the mask register itself, so the hardware does not know it early enough.

- More generally, masks encode control flow information, while vector registers encode data flow information. They are not needed at the same stages in the pipeline. Execution logic does not need mask (except as an optimization for saving power), and control logic does not need data. Encoding control flow information and data flow information in the same registers makes decoupling these stages much harder.

- From the compiler perspective, having some instructions that can only access a subset of registers makes register allocation much harder.


Author: Agner Date: 2016-08-15 10:53
Sylvain Collange wrote:
my point is that the mask could be part of the ABI, just as vector length (implicitly) is. This would enable vectorization-agnostic composition of function calls.
A function may have an unlimited number of input parameters and an unlimited number of outputs (through pointers or references or class members or whatever), all of which could in principle have different masks. That would add a lot of complexity to the ABI for something that is rarely needed. I prefer to specify a mask as an explicit function parameter in the rare cases that it is needed.
By the way, have you considered the micro-architecture and compiler implications of using a subset of vector registers as masks?
A lot of instructions in current instruction sets are using certain registers for specific purposes. I don't think this is a big problem. It is a much bigger problem to have a separate set of registers for masks only, as AVX512 has, in my opinion.
Instructions take up 5 input operands from vector registers (RS, RT, RU, Mask, RD). The old register value RD is required to merge the result.
No, the first source operand is required to merge the result when masking. This happens to be the same as RD only if using an instruction format with too few register operands. This gives a maximum of 5 input dependencies. These are not required at the same stage in the pipeline. The execution stage has typically 2 or 3 input operands. A few unimportant optional instructions have 4 inputs. I added these instructions mainly for testing whether it is feasible to have four input operands.
The address calculation stage in the pipeline can have up to three input dependencies: base pointer, index and vector length (or block size). The architecture does not specify which stage in the pipeline reads the mask register. It could be somewhere between the address calculation stage and the execution stage if the number of register read ports is critical.

So, the total maximum number of input dependencies is five, but the maximum number of input dependencies at one stage in the pipeline can be limited to three if necessary. This is still a lot of complexity to handle for the scheduler, I agree. I have limited the number of output dependencies to one for the same reason. This limitation creates problems for add-with-carry and for integer overflow detection, but people have convinced me that it was more important to limit the number of output dependencies. x86 has two output dependencies on most instructions, including the flags and the floating point control/status registers.

Intel processors have traditionally had a limitation of two input dependencies for each micro-operation, but this has been increased to three in recent versions. AMD have never had any limitation on input dependencies. I don't know how expensive it is to have many read ports, but I prefer to have many read ports rather than splitting instructions into micro-operations. My idea is that a ForwardCom processor should not need to split instructions into micro-ops.
making the operation dependent on the former value of the architectural destination register defeats the purpose of register renaming: write-after-read dependencies are still there.
All superscalar processors handle this by renaming the register so that the input and output use two different physical registers.
A more acceptable implementation would break down each masked instruction into 2 µops: first a computation on the whole vector, then a merge (I believe that is how Knights Landing and Skylake handle masked AVX-512 instructions).
I have not been able to get my hands on any AVX-512 processor yet, so I am not able to test it. If you have any information on the microarchitecture of these processors then please give me a link.
More generally, masks encode control flow information, while vector registers encode data flow information.
The masking mechanism translates control flow into data flow. It does not change the dependency of the other input and output operands, as I explained above. If all elements of a vector are masked off then the hardware will still treat the input operand as a dependency and wait for it. You may want to convert this into a branch if the prediction rate is good.


Author: Sylvain Collange Date: 2016-08-15 14:05
Agner wrote:
A function may have an unlimited number of input parameters and an unlimited number of outputs (through pointers or references or class members or whatever, all of which could in principle have different masks. That would add a lot of complexity to the ABI for something that is rarely needed. I prefer to specify a mask as an explicit function parameter in the rare cases that it is needed.
I do not see in which situation a vectorized program would need to pass multiple masks to a function. If we vectorize a loop with a function call inside, for each iteration i, either the function is called (bit mask 1) or not called (bit mask 0). Masks originate from the control flow of the original program. There is only one control flow regardless of the number of function parameters. So a single mask register is sufficient.
So, the total maximum number of input dependencies is five, but the maximum number of input dependencies at one stage in the pipeline can be limited to three if necessary.
The pipeline stage does not make a difference. If you want to sustain an execution rate of three 5-input instructions per clock cycle, you need the bandwidth to read 15 inputs per cycle on average from the register file. It does not matter in which order and at which pipeline stage they are read.

I agree that limiting the number of outputs is even more important. (Register file area scales with the square of the number of ports and power scales with its cube. But for read ports, you can always follow the brute-force approach of duplicating the register file and get a linear scaling.)
Yes, x86 instructions also output flags, but x86 CPUs use tricks to turn them into single-output instructions by having slightly wider physical registers, storing the flags in the physical destination register, and renaming the architectural flags register to the physical destination of the last instruction issued.

But input ports are also a bottleneck. Specializing registers allows partitioning this complexity. Most ISAs use different registers for integer and FP, for instance.
making the operation dependent on the former value of the architectural destination register defeats the purpose of register renaming: write-after-read dependencies are still there.
All superscalar processors handle this by renaming the register so that the input and output use two different physical registers.
Let's take an example:
Non-masked code

mul_add.d v0, v1, v3, v4
mul_add.d v0, v5, v6, v7

Renaming maps each v0 operand to different physical registers. Assuming all inputs are ready and we have 2 FMA units with latency of 5 cycles, both instructions run in parallel and the code takes 5 cycles. So far so good.

Masked code

mul_add.d v0, v1, v3, v4, mask=v8
mul_add.d v0, v5, v6, v7, mask=v9

Renaming still maps v0 to different physical registers p0 and p1, but the second FMA now has a data dependency on the older value of v0, which is p0. Assuming a by-the-book OoO scheduler, instructions will be considered as dependent and the result will be only available after 10 cycles.

Now with the 2 µop option, the code becomes after expansion and renaming:

mul_add.d p0, v1, v3, v4
merge.d p1, v0, p0, mask=v8
mul_add.d p2, v5, v6, v7
merge.d p3, p1, p2, mask=v9

Both FMAs can start immediately and produce their results after 5 cycles. Then the first merge executes in say, 1 cycle, then the second dependent merge. Total latency is 7 cycles.
In any case, renaming was unable to completely reorder the two instructions.

I agree you could have a very smart scheduler that schedules instructions just 5 cycles before the mask becomes available so it is ready just in time. But it would add a lot of complexity in a critical part of the pipeline. Better to break down the work into enough simple µops and let the out-of-order engine do the heavy lifting. Then you can spend the resources you just saved on wider issue width and higher clocks.
I have not been able to get my hands on any AVX-512 processor yet, so I am not able to test it. If you have any information on the microarchitecture of these processors then please give me a link.
At some point Intel had the Knights Landing software optimization guide on their website, but it seems they took it out. :(
About masks, they say:
AVX-512 permits instructions to use masking. The best performance usually happens for instructions that do not use masking. Instructions that mask with zeroing usually have similar performance. Instructions that mask with merging can occasionally perform much slower, since there is an additional dependency introduced to the instruction. Keep in mind performance considerations when selecting the masking mode.
Their formulation suggests they follow the single-instruction approach with naive scheduling. They do not appear to split in two µops after all. But if they write "much slower" they probably mean it...

The takeaway is that the masking mode (zeroing or merging) should be available as early as possible in the pipeline, ideally at decode just by looking at the instruction. And the programmer/compiler should use zeroing whenever possible.


Author: Agner Date: 2016-08-15 23:46
Sylvain Collange wrote:
Agner wrote:
So, the total maximum number of input dependencies is five, but the maximum number of input dependencies at one stage in the pipeline can be limited to three if necessary.
The pipeline stage does not make a difference. If you want to sustain an execution rate of three 5-input instructions per clock cycle, you need the bandwidth to read 15 inputs per cycle on average from the register file. It does not matter in which order and at which pipeline stage they are read.
Could you have two separate sets of register read ports, one that is wired to the address calculation stage in the pipeline (general purpose registers only), and one that is wired to the execution stage?
Yes, x86 instructions also output flags, but x86 CPUs use tricks to turn them into single-output instructions by having slightly wider physical registers, storing the flags in the physical destination register, and renaming the architectural flags register to the physical destination of the last instruction issued.
Interesting. I wonder where you got all this information from?
Masked code

mul_add.d v0, v1, v3, v4, mask=v8
mul_add.d v0, v5, v6, v7, mask=v9

Renaming still maps v0 to different physical registers p0 and p1, but the second FMA now has a data dependency on the older value of v0, which is p0.
No. Without zeroing, the mask chooses between the result and the first input operand, which is not the destination operand.

mul_add.d v0, v1, v3, v4, mask=v5


is the equivalent of:

Code: Select all

for (i=0; i < vectorlength; i++) 
    v0[i] = v5[i] ? v1[i] + v3[i]*v4[i] : v1[i];

AVX-512 permits instructions to use masking. The best performance usually happens for instructions that do not use masking. Instructions that mask with zeroing usually have similar performance. Instructions that mask with merging can occasionally perform much slower, since there is an additional dependency introduced to the instruction. Keep in mind performance considerations when selecting the masking mode.
Their formulation suggests they follow the single-instruction approach with naive scheduling. They do not appear to split in two µops after all. But if they write "much slower" they probably mean it...

The takeaway is that the masking mode (zeroing or merging) should be available as early as possible in the pipeline, ideally at decode just by looking at the instruction. And the programmer/compiler should use zeroing whenever possible.
You may as well interpret "much slower" as a consequence of the extra dependency which it has to wait for. I actually had the masking mode (zeroing or merging) coded in a bit in the instruction in an early draft of ForwardCom, but I needed this bit for other purposes and I was able to put it into the mask register because it doesn't change dependencies when merging with the first operand rather than with the destination.


Author: Agner Date: 2016-08-16 03:11
Actually, we could limit the maximum number of input dependencies to four if we make the rule that instructions with three input operands including a memory operand cannot have a mask. That would not be a serious limitation.


Author: Sylvain Collange Date: 2016-08-16 10:30
Correction: I meant WAW dependencies in my previous messages, not WAR dependencies.

Agner wrote:
Could you have two separate sets of register read ports, one that is wired to the address calculation stage in the pipeline (general purpose registers only), and one that is wired to the execution stage?
If both sets of ports access the same physical array of registers, then the RF component still needs m+n ports, regardless of where its ports are connected to.
If you replicate the array itself to make two copies of the register file, then yes, it works: you get twice as many read ports for about twice the area. It does not work for write ports, though: you still need to perform all writebacks to both register files to keep their data coherent. Then another complication is the bypass network, which does not scale well with the number of ports either.

You may want to take advantage of the fact that mask registers are a subset of general-purpose registers to replicate only part of the array. It would work on an in-order processor, but as soon as you rename registers this property does not hold any more.
Interesting. I wonder where you got all this information from?
Unfortunately, academic papers seldom go down to that level of detail, so you have to microbenchmark, speculate, and read patents.
For register renaming, here are the classic ones from the P6/K7 era:
https://www.google.com/patents/US6047369
https://www.google.com/patents/US5632023
Even on these 20 year-old architectures, you can see the renaming techniques are quite involved. (Thanks to the infamous INC/DEC instructions that only update a subset of the Flags register...)
No. Without zeroing, the mask chooses between the result and the first input operand, which is not the destination operand.
Now I get it, thanks! Indeed, you need no extra input since you will be reading this register anyway.
I was expecting the semantics of masked instructions to be "pretend nothing happens on lanes with a 0 bit mask". In your ISA even 0-masked lanes have the side effect of a register move. This is unusual, but why not... Wouldn't that complicate register allocation, though? Do you have code examples showing how a compiler can perform if-conversion from some arbitrary code?

Also, are there instruction encodings to choose between the result and the second operand (e.g. v0 = v5 ? v1 + v3 * v4 : v3; for a conditional Horner step)?


Author: Agner Date: 2016-08-17 07:33
Sylvain Collange wrote:
Correction: I meant WAW dependencies in my previous messages, not WAR dependencies.
Agner wrote:
Could you have two separate sets of register read ports, one that is wired to the address calculation stage in the pipeline (general purpose registers only), and one that is wired to the execution stage?
If both sets of ports access the same physical array of registers, then the RF component still needs m+n ports, regardless of where its ports are connected to.
If you replicate the array itself to make two copies of the register file, then yes, it works.
I don't want to duplicate the register file, only avoid the quadratic increase you are talking about.
Then another complication is the bypass network, which does not scale well with the number of ports either.
Can you bypass a result to multiple in-flight instructions using the same bus? If so, you need only few bypass buses.
Interesting. I wonder where you got all this information from?
Unfortunately, academic papers seldom go down to that level of detail, so you have to microbenchmark, speculate, and read patents.
Do you have any references on how the register read ports are designed and why they are so expensive?
Even on these 20 year-old architectures, you can see the renaming techniques are quite involved. (Thanks to the infamous INC/DEC instructions that only update a subset of the Flags register...)
This is exactly the kind of cumbersome heritage that made me want to design a completely new instruction set. ForwardCom never modifies a part of a register only.
I was expecting the semantics of masked instructions to be "pretend nothing happens on lanes with a 0 bit mask".
That was never my plan. The mask value is not available at the time the register renamer assigns the output.
Do you have code examples showing how a compiler can perform if-conversion from some arbitrary code?
A branch in a vectorized loop is translated to a mask. All the good compilers can do this today.
Also, are there instruction encodings to choose between the result and the second operand (e.g. v0 = v5 ? v1 + v3 * v4 : v3; for a conditional Horner step)?
That was not my plan, but of course such an instruction could be defined if there is a need for it. But why would you have a conditional inside a polynomial?
agner
Site Admin
Posts: 177
Joined: 2017-10-15, 8:07:27
Contact:

Merging with first operand

Post by agner »

Author: Sylvain Collange Date: 2016-08-18 10:20
Agner wrote:
Can you bypass a result to multiple in-flight instructions using the same bus? If so, you need only few bypass buses.
Yes, that is essentially how it is done (or at least how it is described in textbooks). But the load on the bus increases with the number of potential consumers, and so do area, delay and power.
There are still a lot of tricks hardware designers can use on a given technology, but the general trends hold anyway. And as far as instruction set design is concerned, general trends are what we care about.
Do you have any references on how the register read ports are designed and why they are so expensive?
Rixner et al., Register Organization for Media Processing, HPCA 2000.
http://www.ai.mit.edu/projects/aries/pa ... gister.pdf
evaluates the tradeoffs using analytical models and considers various distributed and SIMD RF organizations.
Do you have code examples showing how a compiler can perform if-conversion from some arbitrary code?
A branch in a vectorized loop is translated to a mask. All the good compilers can do this today.
I was asking specifically in the context of ForwardCom and the choice of the first operand as the masked source.
e.g. Which code is generated when you vectorize :

Code: Select all

for all i {
  int x = a[i];
  int y = b[i];
  if(x > y) {
    y = x + 1;
  }
  c[i] = y;
  d[i] = x;
}
The translation in a classic predicated ISA is straightforward, but what about in ForwardCom?
I have the impression that the first operand of the masked instructions will end up being the same as the destination anyway, and that we need an extra register move.
That was not my plan, but of course such an instruction could be defined if there is a need for it. But why would you have a conditional inside a polynomial?
I do not know, but the question is what a compiler will emit when vectorizing such a program if the instruction does not exist. (And the compiler has to assume that arithmetic exceptions might be enabled.)


Author: Agner Date: 2016-08-19 12:00
Sylvain Collange wrote:
Rixner et al., Register Organization for Media Processing, HPCA 2000.
www.ai.mit.edu/projects/aries/papers/im ... gister.pdf
evaluates the tradeoffs using analytical models and considers various distributed and SIMD RF organizations.
Thanks for the reference
Which code is generated when you vectorize :

for all i {


This is straightforward: A compare instruction to generate the mask, an addition instruction, and a masked move instruction to conditionally replace y by x+1.
(And the compiler has to assume that arithmetic exceptions might be enabled.)
If you have enabled exceptions for integer overflow and you want to avoid the hypothetical overflow exception in a not-taken branch then you have to put the mask on the addition instruction as well. In other words, you will have two masked instructions using the same mask.
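To illustrate (a sketch in plain C with my own names, assuming ForwardCom's merge-with-first-operand masking), the vectorized loop from the previous post then comes down to a compare, an addition, and a masked move per element:

Code: Select all

/* Scalar model (sketch) of the if-converted loop: compare -> mask, add, masked move. */
void loop_body(int n, const int a[], const int b[], int c[], int d[])
{
    for (int i = 0; i < n; i++) {
        int mask = a[i] > b[i];        /* compare instruction generates the mask       */
        int sum  = a[i] + 1;           /* addition (masked too if overflow traps on)   */
        c[i] = mask ? sum : b[i];      /* masked move: first operand b[i] kept when 0  */
        d[i] = a[i];
    }
}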

This is the reason why exceptions are not good in a vector processor. In floating point code you can disable exceptions and rely on the propagation of NAN and INF values to detect numeric errors. But there is no easy way of detecting integer overflow. I have discussed this in a previous post, but there seems to be no good solution. Many compilers and programming languages simply have no easy way of detecting integer overflow, but this may be a security problem, so I thought that we must provide some method. The following methods for detecting integer overflow may be considered:
  • Use a flag output. This gives every ALU instruction an extra output dependency, and also an extra input dependency if you want the overflow bit to be propagated through a series of calculations. The extra input and output dependencies hamper out-of-order processing.
  • Add an extra overflow bit for each vector element in a vector register. This adds an extra complication for saving the overflow bits when a vector register is spilled to memory.
  • Use exception traps (interrupts) to catch integer overflow. This requires a minimum of complexity to the instruction set, but it has a problem with vector code.
  • Use the even-numbered elements in a vector register for calculation and the odd-numbered elements for propagating overflow information. This is relatively easy to implement but wasteful because only half of the ALUs are used.
Method 1 has been rejected. The other three methods are still being considered for use in ForwardCom.


Author: Sylvain Collange Date: 2016-08-19 15:37
Agner wrote:
The following methods for detecting integer overflow may be considered:
  • Use a flag output. This gives every ALU instruction an extra output dependency, and also an extra input dependency if you want the overflow bit to be propagated through a series of calculations. The extra input and output dependencies hamper out-of-order processing.
  • Add an extra overflow bit for each vector element in a vector register. This adds an extra complication for saving the overflow bits when a vector register is spilled to memory.
  • Use exception traps (interrupts) to catch integer overflow. This requires a minimum of complexity to the instruction set, but it has a problem with vector code.
  • Use the even-numbered elements in a vector register for calculation and the odd-numbered elements for propagating overflow information. This is relatively easy to implement but wasteful because only half of the ALUs are used.
Method 1 has been rejected. The other three methods are still being considered for use in ForwardCom.
What about:
5. Same as 3, and use "traditional" masking on all SIMD instructions.
In traditional masking, an SIMD instruction working on vectors of n operands takes an n-bit mask. For each lane i: If bit i is set, the operation is performed normally on lane i (including triggering exceptions as needed). If bit i is unset, nothing happens on lane i (including silencing exceptions). If multiple exceptions happen in the same vector, you may either report them all at once by exposing an exception mask, or report them one by one in a well-defined order.
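A rough scalar model of this convention (plain C; the overflow test and the exception mask are mine, purely illustrative):

Code: Select all

/* "Traditional" per-lane masking for an integer add (sketch, n <= 32 assumed):
   masked-off lanes neither write a result nor raise an exception; faulting
   lanes are collected so they can be reported together or in lane order. */
#include <limits.h>

unsigned masked_add(int n, unsigned mask, const int a[], const int b[], int r[])
{
    unsigned exception_mask = 0;
    for (int i = 0; i < n; i++) {
        if (!(mask & (1u << i)))
            continue;                                  /* lane disabled: no effect */
        if ((b[i] > 0 && a[i] > INT_MAX - b[i]) ||
            (b[i] < 0 && a[i] < INT_MIN - b[i]))
            exception_mask |= 1u << i;                 /* record overflow, lane i  */
        else
            r[i] = a[i] + b[i];
    }
    return exception_mask;
}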

AVX-512 as well as AMD and Nvidia GPUs work that way, and so do most SIMD machines since the ILLIAC-IV, 45 years ago. The only exceptions were the MMX/SSE/AVX-style short-vector instruction sets, which were not "real" SIMD (at least not until AVX-512). There is no problem with exceptions in SIMD code, as long as per-lane masking is properly defined.

There is some overhead for handling such masking in an out-of-order processor as we discussed, but zeroing can avoid this overhead in many cases.


Author: Agner Date: 2016-08-20 00:21
Thank you for your inventiveness.
Sylvain Collange wrote:

What about:
5. Same as 3, and use "traditional" masking on all SIMD instructions.
In traditional masking, an SIMD instruction working on vectors of n operands takes an n-bit mask. For each lane i: If bit i is set, the operation is performed normally on lane i (including triggering exceptions as needed). If bit i is unset, nothing happens on lane i (including silencing exceptions). If multiple exceptions happen in the same vector, you may either report them all at once by exposing an exception mask, or report them one by one in a well-defined order.
I consider this as implied in (3) because the mask in ForwardCom suppresses all error conditions for disabled elements. Maybe I didn't express this clearly enough. It is still a problem that the behavior depends on the vector length. If a loop generates two exceptions then these exceptions may happen simultaneously if vectors are long, but separately if the same code is run on another processor with shorter vectors. It is difficult to construct the exception handler so that it is guaranteed to generate the same result regardless of the vector length of the microprocessor.

Hubert Lamontagne wrote:
I can think of a couple more options:

Method 5 : Have an extra instruction that applies the operation but generates potential-overflow-or-exception flags instead of result values. This is equivalent to option 1 but using 2 instructions instead of one (but is roughly the same cost in the pipeline since you need 2 micro-ops in both cases to store all the results), or it's like option 4 but without having the two results interlaced together.
Then you have a problem if you want to propagate the overflow bit. You could either check the overflow after each instruction at the cost of N branch instructions in a calculation with N steps; use OR instructions to join the overflow flags at the cost of N OR instructions; or have an extra input for the previous flag on each overflow check instruction at the cost of more input dependencies on the instructions and another dependency chain, making out-of-order execution more difficult. Anyway, the throughput will be 1/3 of the throughput without overflow checking, where method (4) gives 1/2 throughput.
Method 6 : Have separate vector flags registers. This is like method 1 but instead of having the extra inputs and outputs go to the same register file (which requires more ports and thus more micro-ops), they go to a separate register file that only deals with flags which means that they can have their own register renamer and have their own ports, which avoids the N^2 growth associated with adding more ports.
This still requires an extra input dependency and an extra output dependency on each instruction. I think this would be quite costly for the out-of-order scheduler.
Method 7 : Do it all in software. This makes checked operations take multiple instructions which might be slower. But it reduces the overall complexity of the design which means that you might be able to run 1 or 2 more instructions per cycle, which probably offsets the extra cost of separate checking. This can probably be combined with option 5 (just have a couple extra instructions for speeding up overflow checking for the cases that really need it).
I am not sure I understand how this differs from your method 5.
Very rough cost-benefit analysis:
Yes, this shows that all the methods are costly for the hardware as well as for the compiler. I don't want to split instructions into multiple micro-operations.

Perhaps the choice of method depends on the purpose of overflow detection:
  • If the purpose is to abort the application in case of overflow for security reasons then method (3) might be the easiest solution.
  • If the purpose is to recover from an overflow condition by re-calculating with bigger integers then method (3) might also be acceptable if overflows are rare.
  • If the purpose is to detect the carry output of unsigned addition then we may perhaps prefer method (4).
  • If the purpose is some kind of advanced compensation for signed overflow then I don't know what method is best.

Author: Hubert Lamontagne Date: 2016-08-20 18:49
Agner wrote:
Perhaps the choice of method depends on the purpose of overflow detection:
  • If the purpose is to abort the application in case of overflow for security reasons then method (3) might be the easiest solution.

This is why the add instruction triggers a fault on... IBM Z-system I *THINK*. MIPS has a separate specialized instruction for this as well. It's the same reason as for divide-by-zero exceptions: it's used in applications where aborting a failing application is a good thing in general (like server software talking to a database, where the earlier you shut it down, the less damage you inflict...). This is not too popular among end-user facing applications like video games and stuff like Photoshop, because in that case an application shutdown is already the worst thing that can happen.

Agner wrote:
If the purpose is to recover from an overflow condition by re-calculating with bigger integers then method (3) might also be acceptable if overflows are rare.
If I'm not mistaken, that's how LISP did it. The catch with this one is that it requires dynamic typing to work. If you're running a language with dynamic typing, the chances that you're going to be able to run any vector code at all are low. Even for scalar code, dynamic typed languages typically have some other much worse bottlenecks than integer computation (multiple levels of indirection to get to data in Python, for instance). For something like C++ code, IMHO it's just way easier and faster to supersize all your integers to 64bits from the start than to deal with any sort of size change.

Agner wrote:
If the purpose is to detect the carry output of unsigned addition then we may perhaps prefer method (4).
This happens in one very precise case: BIGNUM computation. How important is BIGNUM for you? My impression on this is that BIGNUM falls kinda low on priority lists for architectures like ARM. If you design your architecture for BIGNUM, you can probably make it much more efficient for this kind of stuff than a generic architecture like MIPS. I guess it comes down to what you're targeting.

Agner wrote:
If the purpose is some kind of advanced compensation for signed overflow then I don't know what method is best.
My experience on that is that if you're running into signed overflow problems, the easiest way is simply to switch to floating point or supersize your integers to 64bits.


Author: Sylvain Collange Date: 2016-08-25 07:43
Agner wrote:
I consider this as implied in (3) because the mask in ForwardCom suppresses all error conditions for disabled elements. Maybe I didn't express this clearly enough. It is still a problem that the behavior depends on the vector length. If a loop generates two exceptions then these exceptions may happen simultaneously if vectors are long, but separately if the same code is run on another processor with shorter vectors. It is difficult to construct the exception handler so that it is guaranteed to generate the same result regardless of the vector length of the microprocessor.
I see. I agree that there are scenarios in which it is important to signal exceptions in the same order as they would occur in the original sequential program. e.g. when speculatively vectorizing a while loop with a data-dependent exit condition.
There are still solutions that are not /that/ difficult:

Option 1. In the exception handler, find out which is the first lane that produced an exception; call it i (I assume the ISA gives this information somehow).
- If i=0, report the exception.
- Otherwise, remember the exception, and change the vector length for the current iteration to i-1 (either using VL or a mask register).
At the end of each iteration, check whether there is a pending exception. If there is one, report it. Otherwise, just reset the vector length and continue.

Option 2. Same approach, using hardware instead of software. The "pending exception" becomes a flag register, with 1 flag per lane. The current vector length or mask is automatically reduced as some lanes record exceptions. A new 'commit' instruction is called at the end of each iteration. It raises the recorded exception that would be the first in sequential order.

In practice, I do not expect this to be often necessary, as a speculative vectorizer will need to restore a snapshot to undo all unwanted side effects and restart the loop whenever an exception occurs anyway, regardless of which lane it occurs on.


Author: Hubert Lamontagne Date: 2016-08-19 17:52
Agner wrote:
If you have enabled exceptions for integer overflow and you want to avoid the hypothetical overflow exception in a not-taken branch then you have to put the mask on the addition instruction as well. In other words, you will have two masked instructions using the same mask.

This is the reason why exceptions are not good in a vector processor. In floating point code you can disable exceptions and rely on the propagation of NAN and INF values to detect numeric errors. But there is no easy way of detecting integer overflow. I have discussed this in a previous post, but there seems to be no good solution. Many compilers and programming languages simply have no easy way of detecting integer overflow, but this may be a security problem, so I thought that we must provide some method. The following methods for detecting integer overflow may be considered:
  • Use a flag output. This gives every ALU instruction an extra output dependency, and also an extra input dependency if you want the overflow bit to be propagated through a series of calculations. The extra input and output dependencies hamper out-of-order processing.
  • Add an extra overflow bit for each vector element in a vector register. This adds an extra complication for saving the overflow bits when a vector register is spilled to memory.
  • Use exception traps (interrupts) to catch integer overflow. This requires a minimum of complexity to the instruction set, but it has a problem with vector code.
  • Use the even-numbered elements in a vector register for calculation and the odd-numbered elements for propagating overflow information. This is relatively easy to implement but wasteful because only half of the ALUs are used.
Method 1 has been rejected. The other three methods are still being considered for use in ForwardCom.
I can think of a couple more options:

Method 5 : Have an extra instruction that applies the operation but generates potential-overflow-or-exception flags instead of result values. This is equivalent to option 1 but using 2 instructions instead of one (but is roughly the same cost in the pipeline since you need 2 micro-ops in both cases to store all the results), or it's like option 4 but without having the two results interlaced together.

Method 6 : Have separate vector flags registers. This is like method 1 but instead of having the extra inputs and outputs go to the same register file (which requires more ports and thus more micro-ops), they go to a separate register file that only deals with flags which means that they can have their own register renamer and have their own ports, which avoids the N^2 growth associated with adding more ports. Plus, you can mandate that the whole flags register is always updated to avoid x86 flags register merging insanity. And you can probably have a limit of 1 op per cycle on the number of flags operations to limit the size of the extra flags manipulation block you'd be adding to the CPU (see how ARM did this: most arithmetic ops never touch the flags so it needs much less aggressive handling of flags). If you're doing heavy duty flags register operations, this might be worth the trouble - which is why 8bit and 16bit CPUs all have flags (they need it to deal with oversized integers).

Method 7 : Do it all in software. This makes checked operations take multiple instructions which might be slower. But it reduces the overall complexity of the design which means that you might be able to run 1 or 2 more instructions per cycle, which probably offsets the extra cost of separate checking. This can probably be combined with option 5 (just have a couple extra instructions for speeding up overflow checking for the cases that really need it).
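As a sketch of the software route (plain C; __builtin_add_overflow is the GCC/Clang builtin and an assumption here, other compilers need an equivalent), the overflow flags can be folded together with OR and tested once per block of work instead of branching after every add:

Code: Select all

/* Software overflow checking (sketch): accumulate a sticky flag and test it once. */
int sum_checked(const int x[], int n, int *overflowed)
{
    int sum = 0;
    int sticky = 0;
    for (int i = 0; i < n; i++) {
        int t;
        sticky |= __builtin_add_overflow(sum, x[i], &t);   /* add + overflow check */
        sum = t;
    }
    *overflowed = sticky;
    return sum;
}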

Very rough cost-benefit analysis:
  • Method 1: Requires 2 micro-ops. Note that this is fine if you have to support 2 micro-op operations for other reasons.
  • Method 2: Effects depend on the ABI. If you mandate that callees don't have to save the flags and can trash them, then you only need special save/restore in interrupt handlers.
  • Method 3: Depends on whether the design is in-order or out-of-order. For in-order, this makes an Arm NEON style delayed-SIMD pipeline impossible because all downstream instructions become conditional. On out-of-order, this adds an extra path from the SIMD unit to the retirement unit to signal potential interrupts (plus more complex conditional writeback of other operations retiring on the same cycle).
  • Method 4: Potentially adds more shuffling of values around to add and remove space for flags. Plus, you're potentially getting half as much vector processing for a given register size.
  • Method 5: Adds more ALU instructions to deal with. Added instructions are kinda quirky.
  • Method 6: Requires addition of a full vector flags handling unit, some instructions to load/store flag register values, and potential stalls if too many flags instructions are issued at the same time.
  • Method 7: Error-checked code that isn't limited by memory or branch prediction might run slower, but non-error-checked code might run faster. A simpler architecture is easier to implement and verify.
agner
Site Admin
Posts: 177
Joined: 2017-10-15, 8:07:27
Contact:

Proposal for instruction set - now on Github

Post by agner »

Author: Joe Duarte Date: 2016-08-17 23:46
Agner wrote:
Joe Duarte wrote:
I think it would be interesting if applications could set a few custom constants at start time. A while back I asked about having useful constants hardwired in the CPU (e.g. π), and you guys explained why it probably wouldn't be worth it. I think letting each application tell the CPU to save a few constants that the application will often use might be more useful. The point is to be able to invoke the constant without an immediate, but just with a much smaller shortcode (two or three bits). It's kind of like an application-specific cache
ForwardCom can have constants embedded in instructions. This includes integer constants of all sizes and floating point constants of half, single and optionally double precision (requires 1, 2 and 3 word instruction size, respectively). Your idea would require an extra register set for saving and restoring data that will be needed later. This extra register set would just require two extra instructions for copying data from a normal register to an extra register and vice versa. This could be useful for many purposes and it would reduce the pressure on the data cache. The only problem I can see is that it will be an extra complication to the function calling convention because we need to know whether a function modifies the extra registers or not.
Agner, by the way, I just learned that my idea for a small cache for constants (preferably in the form of special registers, or at least an L1-speed cache) has been widely implemented for years in Nvidia GPUs. It's called "constant memory", and reserves 64 KiB. See:

http://cuda-programming.blogspot.com/20 ... -cuda.html

http://stackoverflow.com/questions/1802 ... -practices

I was thinking of something smaller, like 256 bits in a reserved register for data of any type, and something like 2 or 4 KiB of stable application-specific L1-equivalent cache. The right figures should be empirically determined by careful research.

In fact, I don't think ISAs should settle on 32-bit and 64-bit types and registers unless those turn out to be optimal sizes, which I think is unlikely. As long as we can take care of alignment issues, I would determine the right type and register sizes empirically, based on what we know about common workloads and applications, as well as foreseeable future workloads. I think it's likely that 20-, 40-, and 80-bit types and registers will be more performant than 32/64. (I have no idea what the optimal vector register sizes would be, but I like 320-bit. 64-bit is such a waste when used for memory addresses and pointers. On mobile and desktop, we should be fine with a 1 terabyte address space, which 40 bits give us.)


Author: Agner Date: 2016-08-18 09:01
Joe Duarte wrote:
Agner, by the way, I just learned that my idea for a small cache for constants (preferably in the form of special registers, or at least an L1-speed cache) has been widely implemented for years in Nvidia GPUs. It's called "constant memory", and reserves 64 KiB.
ForwardCom can have read-only memory areas, but they are not guaranteed to be cached. I assume that you want a small memory area that is guaranteed to always be cached. That might actually give a speed gain since memory access is the most important bottleneck in many applications.

The Itanium did not have success with "explicit parallelism" because it was too difficult to handle for the compiler. Maybe "explicit caching" will be easier to handle. You might want to specify explicit caching for the top of the stack in the innermost loop or for often-used constants, jump tables or global variables.

There is a problem with task switching, though. The entire contents of the on-chip extra cache have to be swapped to main memory on each task switch. If you want fast task switching, you might want to have multiple cache banks with one for each process. That would be cumbersome.

Another possibility would be to reserve the extra cache for a particularly critical process. This process would monopolize the extra cache in one or more CPU cores while less important processes would be prevented from accessing it. That's an extra complication for the operating system to keep track of, but it is a quite common scenario that one process is speed-critical while all other processes are unimportant background processes.

Another possibility, which is perhaps simpler to handle, is an extra set of registers which are used for extra storage. Assume, for example, that we supplement the vector registers v0 - v31 by an extra set of vector registers w0 - w31. The extra registers can perhaps do nothing but read and write, e.g.

w8 = v5;
v2 = w8;

These extra registers can be used for often-used constants, for global variables, and for temporarily storing a normal register rather than saving it on the stack.

With an extra set of registers you have the complication of calling conventions. ForwardCom has a mechanism for telling the compiler which registers are modified by e.g. a library function. This mechanism could be extended to the extra registers. A library function could save a callee-save register to an extra register rather than saving it on the stack and reserve this extra register by indicating in the object file which extra registers it is using. The extra registers still have to be saved at every task switch if they are used.
In fact, I don't think ISAs should settle on 32-bit and 64-bit types and registers unless those turn out to be optimal sizes, which I think is unlikely. As long as we can take care of alignment issues, I would determine the right type and register sizes empirically, based on what we know about common workloads and applications, as well as foreseeable future workloads.
The alignment problem is serious. The hardware would have to support misaligned memory accesses as well as vectors of odd-size elements, which would be quite expensive for the hardware design and for performance. There is also a standardization issue.


Author: Joe Duarte Date: 2016-08-31 06:56
Agner wrote:
The Itanium did not have success with "explicit parallelism" because it was too difficult to handle for the compiler. Maybe "explicit caching" will be easier to handle. You might want to specify explicit caching for the top of the stack in the innermost loop or for often-used constants, jump tables or global variables.

Another possibility would be to reserve the extra cache for a particularly critical process. This process would monopolize the extra cache in one or more CPU cores while less important processes would be prevented from accessing it. That's an extra complication for the operating system to keep track of, but it is a quite common scenario that one process is speed-critical while all other processes are unimportant background processes.

Another possibility, which is perhaps simpler to handle, is an extra set of registers which are used for extra storage. Assume, for example, that we supplement the vector registers v0 - v31 by an extra set of vector registers w0 - w31. The extra registers can perhaps do nothing but read and write, e.g.

w8 = v5;
v2 = w8;

These extra registers can be used for often-used constants, for global variables, and for temporarily storing a normal register rather than saving it on the stack.

With an extra set of registers you have the complication of calling conventions. ForwardCom has a mechanism for telling the compiler which registers are modified by e.g. a library function. This mechanism could be extended to the extra registers. A library function could save a callee-save register to an extra register rather than saving it on the stack and reserve this extra register by indicating in the object file which extra registers it is using. The extra registers still have to be saved at every task switch if they are used.
I like your idea of the extra registers that do nothing but read/write. The key of what I want is stability and perhaps determinism in being able to cache application-specific data, constants, masks, etc. The CPU cache right now is a jungle.

I'm thinking about other ways to improve code density and decode efficiency. 1) Turn time or sequence into information. For example, maybe some instructions are only allowed at the start of a process. After a certain milestone, some instruction codes switch to represent different instructions – instructions that cannot be used at the start of a process, only after the milestone. This allows you to encode more instructions in the same 32-bit space. Are there any instructions that will never be used at the beginning of a program/process? 2) More efficient data types, beyond integers and floats. For example, I wonder if we could use a powers-of-two type, such that the literal binary value would represent an exponent of two, e.g. 10001101 would mean 2^141. Something like that would help with encoding some large numbers, though powers of two might not be the right form. Relatedly, instead of hard-coding a single constant like π the way I was thinking several months ago, there could be a type that represented a set of constants – maybe four or five bits to encode 16 to 32 predesignated constants that were empirically determined to recur frequently enough in a wide range of applications to be worth encoding. This would basically be a predefined static dictionary. It could be stuff like 32-bit pi, 64-bit pi, etc. or maybe no version of pi would make the cut. I have no idea. Do you think there are a set of a dozen or more constants that would be useful to put in a dictionary?

And I wonder about simpler, more specialized cores. Right now you've got all these powerful AVX, AVX2, BMI, FMA, TSX instructions and all the legacy x86 and 386 instructions supported on every single core. Maybe it would make sense to have a couple of Turbo cores or something that were SIMD-only, or optimized for heavy compute or HPC workloads. There's been some research on how to design a processor tailored for OpenCL, for example. Maybe ForwardCom could have a simpler, faster subset of the ISA designed for highly parallelized applications using OpenMP, OpenCL, or something like Threaded Building Blocks. It would be like Big/Little, but more specifically aimed at high performance parallel code in the little cores. A new ISA doesn't have the backwards compatibility constraints Intel is limited by, where they throw the kitchen sink into every core.


Author: Agner Date: 2016-08-31 11:08
Joe Duarte wrote:
For example, maybe some instructions are only allowed at the start of a process. After a certain milestone, some instruction codes switch to represent different instructions – instructions that cannot be used at the start of a process, only after the milestone.
The opcode space is not so crowded that this would be worth the extra complexity, but some code spaces are reserved for application-specific or user-defined instructions.
I wonder if we could use a powers-of-two type, such that the literal binary value would represent an exponent of two, e.g. 10001101 would mean 2^141. Something like that would help with encoding some large numbers, though powers of two might not be the right form.
ForwardCom includes formats that code constants as a << b = a · 2^b, but only for immediate constants, not for variables. It also supports half precision floating point constants - but not variables.
Relatedly, instead of hard-coding a single constant like π the way I was thinking several months ago, there could be a type that represented a set of constants – maybe four or five bits to encode 16 to 32 predesignated constants that were empirically determined to recur frequently enough in a wide range of applications to be worth encoding.
If we have an extra set of registers, these could be used for constants that are reused in a program.


Author: Jorcy Neto Date: 2016-09-01 12:16
Joe Duarte wrote:
And I wonder about simpler, more specialized cores. Right now you've got all these powerful AVX, AVX2, BMI, FMA, TSX instructions and all the legacy x86 and 386 instructions supported on every single core. Maybe it would make sense to have a couple of Turbo cores or something that were SIMD-only, or optimized for heavy compute or HPC workloads. There's been some research on how to design a processor tailored for OpenCL, for example. Maybe ForwardCom could have a simpler, faster subset of the ISA designed for highly parallelized applications using OpenMP, OpenCL, or something like Threaded Building Blocks. It would be like Big/Little, but more specifically aimed at high performance parallel code in the little cores. A new ISA doesn't have the backwards compatibility constraints Intel is limited by, where they throw the kitchen sink into every core.
Indeed, the 1st-gen Xeon Phi (a.k.a. Knights Corner) was basically a 64-bit extended P54C core (pre-MMX Pentium) with a custom (sorta like AVX-512F) 512-bit vector ISA. And it was actually marketed as a non-MMX/SSE/AVX-compatible product.

As the current-gen Xeon Phi (Knights Landing) has adopted AVX-512 but also the whole MMX/SSE/AVX sets, an "AVX-512 only" core would surely be conceivable and even desirable in a hybrid design (like the Big/Small scheme). But I guess Intel wouldn't like to market another non-backwards-compatible CPU.


Author: Yuhong Bao Date: 2016-07-12 03:35
It is probably not difficult to create a new x86 version that is user mode compatible with most modern programs but lacks things like segmentation (including the GDT/LDT) and real mode. New OS versions would be required, but most modern user mode programs would work with few if any modifications. Anyone want to do a proposal for that?


Author: Hubert Lamontagne Date: 2016-07-12 09:11
Yuhong Bao wrote:
It is probably not difficult to create a new x86 version that is user mode compatible with most modern programs but lacks things like segmentation (including the GDT/LDT) and real mode. New OS versions would be required, but most modern user mode programs would work with few if any modifications. Anyone want to do a proposal for that?
Afaik, this is one of the things designed into UEFI (a pathway to removing 16bit/real mode and segmentation by booting straight into 32bit or 64bit mode).
agner
Site Admin
Posts: 177
Joined: 2017-10-15, 8:07:27
Contact:

Things from MIPS (and novel things)

Post by agner »

Author: Anonymous Date: 2016-07-28 07:10
Agner, have you studied the MIPS architecture? What do you think of "branch delay slots" ? Personally, I like them, because of the following:

There are things the software developer (and the compiler) easily knows, but the hardware developer can hardly know them. The instruction set should give the software developer means to express that knowledge to the hardware.

One thing software developers know is which procedure will be called on a function pointer (because of polymorphism, indirect branches have become quite popular), but the hardware couldn't possibly know it. When indirect branch instructions like "CALL (register)", "JR (register)" or "JALR (register)" are executed, the hardware is absolutely clueless as to where to move forward, because it needs to know the value of the register before proceeding (which it likely won't know; because of the modern pipelined approach, it needs the write-back value of the register). This will very likely lead to a stall.

But, with the instructions "SETCALL (register)" and "CALL (no arguments)", "SETJUMP (register)" and "JUMP (no arguments)", it's possible to execute instructions while the processor is fetching instructions from the target address, like this:

SETCALL (register) ; sets the call target, the processor starts to fetch instructions from it.
instruction one...
instruction two...
instruction n...
CALL (no arguments) ; calls the procedure pointed by the call target.

The delay slot gives the processor time to prepare for the call or jump, and relieves the branch predictor of work. The branch predictor might be completely removed from the processor with this approach (more silicon for the hardware developer!). I have never seen any processor with instructions like that (and I have studied the 68000, x86, x64, ARM and MIPS). "if" statements could also benefit from this approach.

Delay slots are not novel. The novel part here is that the slot size can be set arbitrarily.

I like the statements on the section "10.8 Function libraries and link methods". I wonder if they are there because of the indirect branch problem.

Cache miss is also a big topic. Suppose I have five functions (functions in the mathematical sense), A, B, C, D, E, which can be fully executed in parallel. The call instructions are laid out in the following order:

CALL A; CALL B; CALL C; CALL D; CALL E;

But only the data for the E function is ready (is in the cache). Would the processor stall while waiting for the data of A, B, C, D to arrive? And, even worse, wait for the completion of A, B, C and D?

I like that vectors are present on this new instruction set and are an important part of it. Many algorithms use matrix multiplication, which can greatly benefit from instruction-level parallelism, but the current instruction sets are simply disappointing. I'm currently trying to write a fast 2048-bit modular multiplier (for exponentiation with 2048-bit exponents, which should happen under tens of milliseconds, and currently is taking hundreds of milliseconds), and, although the Toom-Cook algorithm uses matrix multiplication, it's simply not possible to express it nicely with the instructions given by x86 or x64.


Author: Agner Date: 2016-07-28 12:16
Anonymous wrote:
What do you think of "branch delay slots" ?
I would categorize this under the label of explicit parallelism. You are explicitly specifying what the processor should do while fetching a jump target. I think there is a problem with letting the compiler decide what to do in parallel rather than leaving this decision to the CPU. The compiler will have to know which things the processor can or cannot do in parallel and how many instructions it can execute in the time it takes to fetch the jump target. That will lock you into a particular hardware implementation. If you later develop a new processor with different timings or a different degree of parallelism then you cannot run optimally on the new processor unless you compile again for this particular processor.

We have seen this problem with the Itanium where instructions were put together in blocks of three. The decision of which three instructions to execute in parallel was left to the compiler rather than the CPU. It was very difficult to make such a compiler, and you were pretty much locked into a fixed degree of parallelism. I think it is better to have a genuine superscalar design with full out-of-order capabilities. Then you can run the same binary code on different CPUs with different amounts of parallelism, and the compiler doesn't need to know the details of the CPU.
One thing software developers know is which procedure will be called on a function pointer (because of polymorphism, indirect branches have become quite popular), but the hardware couldn't possibly know it.
In a typical application, the value of the function pointer will be available long before the values of the function parameters so that the call instruction can be executed out of order. The CPU will execute the call instruction and fetch and decode the instructions inside the function while it is waiting for the function parameters.
Cache miss is also a big topic. Suppose I have five functions (functions in the mathematical sense), A, B, C, D, E, which can be fully executed in parallel. The call instructions are laid out in the following order:

CALL A; CALL B; CALL C; CALL D; CALL E;

But only the data for the E function is ready (is in the cache). Would the processor stall while waiting for the data of A, B, C, D to arrive? And, even worse, wait for the completion of A, B, C and D?
No. Assuming that there are no code cache misses, but only data cache misses, a superscalar CPU will fetch and decode everything in advance, if the out-of-order buffer is big enough, while waiting for data. If the data for E are available before the data for the preceding functions, then E will be executed first.
I like that vectors are present on this new instruction set and are an important part of it. Many algorithms use matrix multiplication, which can greatly benefit from instruction-level parallelism, but the current instruction sets are simply disappointing.
Matrix multiplication often involves a lot of data permutation and horizontal addition which takes extra time.


Author: Hubert Lamontagne Date: 2016-07-28 13:11
Anonymous wrote:
What do you think of "branch delay slots" ?
My opinion on branch delay slots is that they're great for a certain size of CPUs: large enough to be 32 bits and maybe have a code cache (so that it's not limited by instruction memory bandwidth), but small enough that it doesn't have a branch predictor. MIPS is a perfect example of this, and tons of MIPS have been used in stuff like Playstations where they're the ideal size. Not to mention the whole "fast microcontroller" market - which is right now dominated by ARMs (which don't have a branch delay slot but would be the right kind of CPU for it). Afaik, Nvidia's Denver architecture (in-order VLIW, very modern) has branch delay slots.

Once your CPU is large enough to have a branch predictor, the speed benefit shrinks a lot. And when your CPU becomes out-of-order, the branch delay slot becomes a bit of a liability since it increases the complexity of the corner cases, and out-of-order already forces you to have a rollback mechanism anyway, so why not use it to run instructions speculatively past branches? And when your pipeline becomes longer, this means you still have stalls unless your branch delay slot is very long: for instance, if your whole issue stage is 5 cycles, and your CPU runs 3 instructions per cycle, you need a branch delay slot of 15 instructions... kinda a lot, no?

That being said, your proposal is interesting, and for a semi-ordered CPU (something in between out-of-order and in-order) would probably be a good idea, and is not too risky because even if it turns out to not be a good idea, it can still probably be implemented fairly easily in an out-of-order CPU. But I don't think I've ever seen any good examples of semi-ordered architectures - once you go past a certain complexity, it becomes simpler to implement the Tomasulo algorithm and register renaming and go full out-of-order.

Anonymous wrote:
Cache miss is also a big topic. Suppose I have five functions (functions in the mathematical sense), A, B, C, D, E, which can be fully executed in parallel. The call instructions are laid out in the following order: CALL A; CALL B; CALL C; CALL D; CALL E; But only the data for the E function is ready (is in the cache). Would the processor stall while waiting for the data of A, B, C, D to arrive? And, even worse, wait for the completion of A, B, C and D?
This is what out-of-order CPUs do! Provided that you don't exceed the limits on how many instructions the CPU can reorder and what situations it can deal with, they're designed to run at least the parts of functions further up ahead that generate memory addresses, which lets you deal with a lot more memory latency.
agner
Site Admin
Posts: 177
Joined: 2017-10-15, 8:07:27
Contact:

Matrix multiplication

Post by agner »

Author: Agner Date: 2016-07-29 07:51
Anonymous wrote:
Many algorithms use matrix multiplication, which can greatly benefit from instruction-level parallelism, but the current instruction sets are simply disappointing [...] it's simply not possible to express it nicely with the instructions given by x86 or x64.
I wonder if matrix multiplication is so important that it would be worthwhile to implement special instructions that facilitate matrix multiplication. A complete matrix multiply instruction would be far too complex, but instructions for doing the necessary permutations would be realistic.

Assume that we have an NxM matrix A and an MxP matrix B and we want to calculate the NxP product AB. The matrixes are stored in row-major order. We can calculate the matrix product in the following way. Copy the first column of A into P identical columns to make an NxP matrix with all columns identical. Copy the first row of B into N identical rows to make an NxP matrix with all rows identical. Multiply the two NxP matrixes, element by element. Do the same with the second column of A and the second row of B, and so on. Now we have M matrixes of size NxP. The result is the sum of these M matrixes.

If we have a universal permute instruction then we can copy the rows or columns without problems. However, the complexity of a full permute instruction grows with the vector length squared so this may be too expensive for very long vectors. Instead, we could have a repeat-block instruction which repeats the first sub-vector into a long vector, and a repeat-within-block instruction that copies the first element of each subvector into all elements of their subvector block. These instructions can be used for repeating the rows and columns, respectively. If the vector register is not long enough to hold the entire matrix, then split the matrix into multiple registers, each holding an integral number of rows.
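For reference, a scalar model of the computation that the repeat instructions would feed (a plain-C sketch, my own code, with the broadcasts expressed as index arithmetic rather than real instructions):

Code: Select all

/* Reference model (sketch) of the repeat-based matrix multiply described above:
   for each k, column k of A is broadcast within each row block, row k of B is
   repeated across all rows, and an element-wise multiply-accumulate adds the
   product into the NxP result. All matrices are row-major. */
void matmul(int N, int M, int P, const double A[], const double B[], double C[])
{
    for (int i = 0; i < N * P; i++)
        C[i] = 0.0;
    for (int k = 0; k < M; k++)
        for (int i = 0; i < N; i++) {
            double a_ik = A[i * M + k];                  /* broadcast within block i */
            for (int j = 0; j < P; j++)
                C[i * P + j] += a_ik * B[k * P + j];     /* repeated row k of B      */
        }
}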

This will still be somewhat complicated to implement, but much cheaper than a universal permute instruction.


Author: Hubert Lamontagne Date: 2016-07-29 09:33
ARM's SIMD has instructions that can help matrix multiply and are also useful for a whole bunch of other SIMD tasks:

VMUL, VMLA, VMLS with broadcasted operand - Vector multiply by broadcasted scalar extracted from a vector register (+accumulating and subtracting versions)
http://infocenter.arm.com/help/index.js ... JDBDJ.html

VTRN - Matrix transpose
http://infocenter.arm.com/help/index.js ... DJAEA.html

VPADD - Pairwise add
http://infocenter.arm.com/help/topic/co ... GDEGD.html

VZIP, VUZP - Interleave/deinterleave vectors
http://infocenter.arm.com/help/topic/co ... CACAB.html

VDUP - Broadcast scalar extracted from a vector register to whole vector register
http://infocenter.arm.com/help/topic/co ... DIGCC.html


Author: John D. McCalpin Date: 2016-07-29 10:03
There are two problems with matrix multiplication in recent architectures:
1. The biggest problem with matrix multiplication comes from the need to keep many independent accumulators to tolerate pipeline latency. For example, on Intel's Haswell (Xeon E5 v3) processor with two FMA units and a 5-cycle FMA latency, a minimum of 10 accumulators are required. Each of these accumulators must be a full-width SIMD vector in order to use all the FMA hardware. It turns out that 10 accumulators is not a useful number for register blocking of matrix multiplication (5x2 and 2x5 don't provide good re-use), so the optimum algorithms must use 12 accumulators -- leaving only 4 register names for everything else. The optimal blocking uses 3 registers for values that are re-used in non-adjacent code, leaving a single register for handling all of the load-with-broadcast values that are used for the input array that requires transposition. (A small sketch of this accumulator pattern follows after point 2.)
2. A secondary problem is the absence of transposition support. When going from AVX2 (described above) to AVX-512, the number of accumulators required remains the same, but the wider SIMD registers means that the code must implement 8 load-with-broadcast operations (for the transposed input array) in the same number of cycles that were required for 4 load-with-broadcast operations on Haswell. The only way that this can happen is to use a larger blocking factor for register blocking so that more data re-use occurs and more cycles are available with a free load port. Since AVX2 already required all 16 registers, an AVX-512 implementation clearly requires more -- so it is not surprising that the AVX-512 instruction set provides 32 named registers.
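
A minimal C++ sketch of the register-blocking idea in point 1 (scalar stand-ins for SIMD registers; the 3x4 = 12 accumulator shape and the operand layouts are only illustrative):

#include <cstddef>

// With two FMA units and a 5-cycle FMA latency, at least 2 * 5 = 10 independent
// dependency chains are needed to keep the units busy; a 3x4 block of accumulators
// is the usual compromise. Each acc[i][j] is its own chain, so consecutive
// iterations of k never wait on each other.
void block_kernel(const float* a, const float* b, float* c, std::size_t K) {
    float acc[3][4] = {};
    for (std::size_t k = 0; k < K; ++k) {
        for (int i = 0; i < 3; ++i) {
            const float ai = a[k * 3 + i];            // load-with-broadcast operand
            for (int j = 0; j < 4; ++j)
                acc[i][j] += ai * b[k * 4 + j];       // one FMA per accumulator
        }
    }
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 4; ++j)
            c[i * 4 + j] = acc[i][j];
}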

The first problem is probably the easiest to solve. A floating-point multiply-add unit with a single-cycle multiply-accumulate mode is described by S. Jain et al., in “A 90mW/GFlop 3.4GHz reconfigurable fused/continuous multiply-accumulator for floating-point and integer operands in 65nm,” VLSID ’10., 2010. This would reduce the number of accumulators to 2 (one per FMA unit) and (in the AVX2 example) would free 10 registers for holding data for re-use.

The second problem is definitely more difficult to solve. Although it would be relatively easy to build an FMA unit with a built-in broadcast of a scalar element, this would not help with getting the scalar element(s) into the register(s) in the first place. In the standard form of matrix multiplication, the values that need to be broadcast are collected from strided memory addresses (with a stride equal to the dimension of the matrix). Given that most (all?) recent processors have "wide" (multiple 64-bit word) interfaces at all levels of the cache/memory hierarchy, there is physically no way to get full bandwidth while extracting only one element per block.

A possible route forward would be a SIMD "transpose with broadcast" instruction, taking a single vector of P contiguous elements and broadcasting each element across one of P output SIMD registers. The overall matrix multiplication algorithm would have to be organized so that all of these P elements would be used before being discarded, which restricts the space of possible orderings considerably. I have not worked through the details to see if this leads to a workable solution. A non-SIMD approach based on independent single-lane vector units operating from a scratchpad memory (rather than SIMD registers) might be preferred, but that opens up too many options to think about. Some encouraging developments in high-radix on-chip crossbar technology that would help in any of these approaches to parallelism are reported by Sudhir Satpathy et al. in a number of publications, including
http://citeseerx.ist.psu.edu/viewdoc/do ... 1&type=pdf


Author: Agner Date: 2016-07-29 11:03
John D. McCalpin wrote:
A possible route forward would be a SIMD "transpose with broadcast" instruction, taking a single vector of P contiguous elements and broadcasting each element across one of P output SIMD registers.
Hubert argued that multiple outputs is a bad thing for an out-of-order CPU. Consequently, I made the ForwardCom design so that it allows only one output register for each instruction. But it allows very long vector registers. I am thinking about the situation where the vector register is long enough to contain the entire matrix, or at least several rows of it. What I proposed is the same as you described, except that it is all contained in a single vector register.

Introduction website

Post by agner »

Author: Agner Date: 2016-08-01 01:48
I have made an introduction to the ForwardCom project at http://www.forwardcom.info.

I have added a few optional instructions to facilitate matrix multiplication.


Author: EricTL Date: 2017-07-17 18:15
Great proposal so far with a lot of interesting ideas.
May I suggest a couple of instructions:
-Bit reverse: Reverse all the bits of a register.
This is very useful to enable bit hacks for the most significant bits.

-LUT-based logical ops: Rd, Rs, Imm: Rd OPS Rs -> Rd, where OPS is an 8-bit LUT implementing any three-input, one-output logical expression.
4 more bits of the immediate field will specify msb, lsb and the following 3 cases:
OPS(Rd, msb:Rd >> 1, Rs) -> Rd
OPS(Rd, Rd:lsb << 1, Rs) -> Rd
OPS(Rd:lsb << 1, msb:Rd >> 1, Rs) -> Rd
That instruction could replace all the other logical instructions (And, Or, And-Not, ...) and adds some new ones.

-MultiCarryOps:
Rd ops Rs under Rc (carry mask): ops can be +, -, >>, <<, rot. The carry mask works as follows:
Each bit set in Rc indicates the msb of a sub-bit-string, i.e. the carry does not propagate past a set bit in the carry mask.
The idea is to enable efficient bitfield ops.
A compressed version could force a bit field to be at least two bits and offer Rd ops Rs, where Rs is OR'ed together with the carry mask and a single bit after each set Cm bit.
e.g. Cm = 0x0420: 0x7C1F + 0x0631 => 0x8020; 0x7C1F - 0x0631 => 0x77FE; 0x7C00 - 0x0230 => 0x7FE0
These instructions would enable better use of bit-fields and more complex loop counting in a single instruction. Also, if made atomic, they could enable new lock-free algorithms.


Author: Agner Date: 2017-07-18 00:28
EricTL wrote:
May I suggest a couple of instructions:
-Bit reverse: Reverse all the bits of a register.
This instruction exists. It is called bit_reverse.
LUT base logical ops:
Instruction truth_tab2 is a two-input LUT.
Instruction truth_tab3 is a three-input LUT.

These instructions implement arbitrary functions for boolean vectors. However, the function is applied only to the least significant bit of each vector element. There are two reasons for this limitation. (1) This is the way boolean vectors are represented in ForwardCom. The boolean value is stored in the least significant bit. The remaining bits are option bits, which can be propagated to the result and used for specifying options to e.g. a floating point instruction using the LUT result as a mask. (2) The 3-input LUT can be implemented in hardware by using an 8-bit barrel shifter where the 8 bits of the LUT are shifted right by the count defined by the three input bits. The hardware already has an 8-bit barrel shifter for each 8-bit vector element so that the implementation is cheap. A LUT function that applies to all bits of a vector will require one 8-bit barrel shifter for each vector bit, which would be more expensive in terms of silicon space.
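
As a rough C++ model of how such a 3-input LUT evaluates (the bit order of the index operands is an assumption, and option bits are ignored here), each element contributes its least significant bit to a 3-bit index that selects one bit of the 8-bit immediate:

#include <cstdint>
#include <cstddef>

// Scalar model of a 3-input LUT on boolean vectors: the 8-bit table is shifted
// right by the index formed from bit 0 of each operand, which is exactly what an
// 8-bit barrel shifter per element can do in hardware.
void truth_tab3_model(const uint32_t* a, const uint32_t* b, const uint32_t* c,
                      uint32_t* r, std::size_t n, uint8_t lut) {
    for (std::size_t i = 0; i < n; ++i) {
        unsigned index = ((a[i] & 1) << 2) | ((b[i] & 1) << 1) | (c[i] & 1);
        r[i] = (lut >> index) & 1;
    }
}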

I am not sure I understand what you want to do with the msb, lsb inputs?
MultiCarryOps:
Rd ops Rs under Rc (Carry mask): Ops can be +, - >>, <<, rot The carry mask work as follow
Each bit set in the Rc indicate the msb of a sub bit string. i.e. the carry does not propagate to the left bit of a set bit in the carry mask.
Interesting idea. But this has no high level language equivalent. It will only be used by programmers who master the use of intrinsic functions or assembly language. A simpler solution is to just leave one unused bit between each bitfield to avoid carry from one bitfield to the next.

BTW, I have made the move_bits instruction for efficient insertion and extraction of bit fields.


Author: EricTL Date: 2017-07-20 19:19
I thought I replied already. Maybe I did not send it.

Agner wrote:
Instruction truth_tab2 is a two-input LUT.
Instruction truth_tab3 is a three-input LUT.

These instructions implement arbitrary functions for boolean vectors. However, the function is applied only to the least significant bit of each vector element. There are two reasons for this limitation. (1) This is the way boolean vectors are represented in ForwardCom. The boolean value is stored in the least significant bit. The remaining bits are option bits, which can be propagated to the result and used for specifying options to e.g. a floating point instruction using the LUT result as a mask. (2) The 3-input LUT can be implemented in hardware by using an 8-bit barrel shifter where the 8 bits of the LUT are shifted right by the count defined by the three input bits. The hardware already has an 8-bit barrel shifter for each 8-bit vector element so that the implementation is cheap. A LUT function that applies to all bits of a vector will require one 8-bit barrel shifter for each vector bit, which would be more expensive in terms of silicon space.

I am not sure I understand what you want to do with the msb, lsb inputs?
Let's put aside the shift left or right by one (the immediate lsb and msb were there to feed the shift on both ends of the register).

A 4 bit LUT can be made with 3 multiplexers (mux.) per bit.

I contend the logical part of the ALU will need at least 3 mux. per bit to choose from A | B, A & B, A & ~B and A ^ B.
So in essence, for the encoded logical instructions the 2 mux control bits come from the instruction, while for the LUT-based version the two control bits come from the registers.

In the simplest implementation the 4 logical ops will be done first, then the mux4:1 will choose which one to keep.

In the LUT case, while the registers are being read, the 4 immediate bits for the LUT are amplified and broadcast to the mux4:1.

In modern CMOS on SOI a mux2:1 is just 2 transistors. So if one counts two inverters (2 Tr x 2) for every 8 bits => 4 x 4 x (64 / 8) = 128 Tr for the broadcast,
plus 64 x 6 Tr for the mux4:1.

Total 512 Tr for the universal solution vs. a minimum of 64 x (6+1) Tr for the mux4:1 part plus the cost of the four logical ops...
Granted, it is oversimplified, but I guess anyone can get the gist of it.

Moreover, with the LUT solution one also gets Not, Clear, Id (move) plus any other less common logical expression,
and in a two-register instruction setting (Rd = Rd ops Rs) the LUT enables both (Rd := Rd ops Rs) and (Rd := Rs ops Rd).

The case for an 8-bit LUT (a universal three-input logical operation) is less clear. The mux is now 8:1 => 14 Tr and the broadcast doubles to 256 Tr => 1152 Tr in total.
I still think that instruction can be leveraged by a compiler to, at the very least, make the code smaller.

To finish on a rather hackish and somewhat silly note: in the case of the tiny instruction format, the 4-bit LUT can be implemented by using the code for Rs as both the register number and the LUT. In that form Clear can be implemented as LUT R0, thereby reclaiming that opcode.

The Not opcode can be reclaimed as well by LUT R5 (LUT convention b3,b2,b1,b0 -> 0101),
freeing an opcode for something else. A clever compiler should be able to play with the limitation somewhat and get some further optimization.


Author: Agner Date: 2017-07-20 23:27
I don't like the idea of using the same bitfield as both instruction and register. You will not have a free choice of what register to use, and the out-of-order scheduler will have problems detecting whether it should wait for the value of that register or not.

Proposal for instruction set - now on Github

Post by agner »

Author: Joe Duarte Date: 2016-08-04 01:23
Agner, there are a couple of specialized instruction domains that you seem to be pushing to FPGAs or other coprocessors, but I think ForwardCom will suffer from their exclusion.

Encryption may be specialized, but it's not optional anymore. Fast encryption is going to be a central requirement of any computing device for decades. So the lack of AES and similar instructions is a disadvantage. You could say that FPGAs can do it, but computers today are expected to be able to do this without extra silicon. FPGAs are not in anyone's Bill of Materials for a smartphone, PC, or server (well, in rare cases for servers), and there's still a lot of doubt over whether there are enough programmers who know how to fully leverage FPGAs.

Parsing text and bytecode is a huge part of what computers do every day, so I think a new ISA should take a clean look at how to speed up parsing (and JIT'ing). You don't have any counterpart to the SSE 4.2 string comparisons, and I wouldn't want to punt that to FPGAs either. I know that programs rarely use those SSE 4.2 instructions today, but that sort of thing will become more common in the future as there is more pressure on energy use in cloud computing and battery life on mobile. At some point someone is going to build a clean-sheet web server that fits entirely in the L2/L3 cache.

Have you been able to guess at performance improvements from the new ISA? Holding everything else equal, the foundry technology, etc. would ForwardCom be faster or more efficient than a comparable Broadwell-E? I'm thinking of your new loops, for example, and other features. What sense do you have of the performance difference?


Author: Agner Date: 2016-08-04 14:37
Joe Duarte wrote:
Fast encryption is going to be a central requirement of any computing device for decades. So the lack of AES and similar instructions is a disadvantage.


You are probably right that AES instructions would be useful.
Parsing text and bytecode is a huge part of what computers do every day, so I think a new ISA should take a clean look at how to speed up parsing (and JIT'ing). You don't have any counterpart to the SSE 4.2 string comparisons, and I wouldn't want to punt that to FPGAs either.
Text strings are rarely so long that it really matters. It will take less time to parse a string than to read it from an SSD anyway.

x86 AES and SSE4.2 instructions are still only 128 bits while most other vector instructions are 256 - and soon 512 - bits. This is because they are costly to implement. In most cases, it will be faster to parse a string using simple compare instructions with the maximum vector length than an SSE4.2 instruction limited to 128 bits.
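
A generic C++ sketch of this point (no ForwardCom or SSE intrinsics assumed): the scan below is a plain element-wise compare with no carried dependency, so it maps onto full-width vector compares whatever the register length is.

#include <cstddef>

// Find the first occurrence of a delimiter byte; the loop body vectorizes into
// a wide compare plus a find-first-set on the resulting mask.
std::size_t find_byte(const unsigned char* s, std::size_t n, unsigned char delim) {
    for (std::size_t i = 0; i < n; ++i)
        if (s[i] == delim)
            return i;
    return n;    // not found
}
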
Have you been able to guess at performance improvements from the new ISA? Holding everything else equal, the foundry technology, etc. would ForwardCom be faster or more efficient than a comparable Broadwell-E? I'm thinking of your new loops, for example, and other features. What sense do you have of the performance difference?
Instruction fetch and decoding is a bottleneck on all Intel processors. They are putting a micro-operation cache after the decoder to fix this, but each micro-op uses many bits so physical distances on the silicon chip are a limitation. The ForwardCom instruction format will be more efficient than this because it can decode fast and it is compact. x86 has a lot of suboptimal designs because it needs to preserve backwards compatibility with a long legacy of short-sighted designs and patches. They even keep supporting undocumented opcodes from the old 8086.

It is difficult to predict how much you can gain from having longer vectors. Some applications will be unable to use long vectors. The special vector loop will be an advantage whenever the loop count is not certain to be a multiple of the vector size.
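
For illustration, a minimal C++ sketch of that special vector loop, assuming the semantics described on forwardcom.info (a pointer to the end of the array, an index that counts the remaining elements downwards and is clamped to the maximum vector length, so no separate tail code is needed):

#include <cstddef>

// One pass over n floats; the inner loop stands in for a single vector operation
// of length 'len'. The last iteration automatically becomes a partial vector
// when n is not a multiple of maxlen.
void add_one(float* a, std::size_t n, std::size_t maxlen) {
    float* end = a + n;                                   // pointer to the end of the array
    for (std::ptrdiff_t j = (std::ptrdiff_t)n; j > 0; j -= (std::ptrdiff_t)maxlen) {
        std::size_t len = (std::size_t)j < maxlen ? (std::size_t)j : maxlen;
        for (std::size_t i = 0; i < len; ++i)
            end[-j + (std::ptrdiff_t)i] += 1.0f;          // negative index from the end
    }
}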


Author: Hubert Lamontagne Date: 2016-08-05 12:25
Joe Duarte wrote:
Agner, there are a couple of specialized instruction domains that you seem to be pushing to FPGAs or other coprocessors, but I think ForwardCom will suffer from their exclusion.

Encryption may be specialized, but it's not optional anymore. Fast encryption is going to be a central requirement of any computing device for decades. So the lack of AES and similar instructions is a disadvantage. You could say that FPGAs can do it, but computers today are expected to be able to do this without extra silicon. FPGAs are not in anyone's Bill of Materials for a smartphone, PC, or server (well, in rare cases for servers), and there's still a lot of doubt over whether there are enough programmers who know how to fully leverage FPGAs.
What about a specialized AES encryption device, accessed using memory mapped I/O like other peripherals such as DMA controllers, timers, etc.? Wouldn't that do the job, and keep the overall complexity lower? The OS would probably have to make sure no 2 programs are using the AES device at the same time but otherwise it would be pretty straightforward: set the key, push the data through. Afaik that's close to how ARM does it.

Joe Duarte wrote:
Parsing text and bytecode is a huge part of what computers do every day, so I think a new ISA should take a clean look at how to speed up parsing (and JIT'ing). You don't have any counterpart to the SSE 4.2 string comparisons, and I wouldn't want to punt that to FPGAs either. I know that programs rarely use those SSE 4.2 instructions today, but that sort of thing will become more common in the future as there is more pressure on energy use in cloud computing and battery life on mobile. At some point someone is going to build a clean-sheet web server that fits entirely in the L2/L3 cache.
Text parsing is hard to speed up. It tends to be made up of loops full of loads, stores, and conditional branches, and not very many of the kind of mathematical operations that can be sped up. It's inherently serial in nature. Afaik, there are only 2 approaches for this at the moment:

- If your load is to run a whole bunch of programs at the same time (= it's a server), you can use the UltraSparc design: a really simple in-order CPU that uses tons of hyper-threading, and every time something stalls just switch to the next thread. The simple in-order core takes a lot less space so you can cram a whole bunch of them on the same chip, which lets you compensate for speed deficiencies.

- Otherwise, just use a general purpose out-of-order CPU that's as fast as possible. x86 fits this profile perfectly (which is why it's so popular still) and ARM is evolving towards that too. This is part of why RISC is popular: starting with a classic 4-issue out-of-order CPU like the Dec Alpha 21264, it's very hard to find new instructions to add that will actually speed it up at text processing and not be overly specific, and afaik the stuff that speeds it up the most is just making branch prediction and data cache as fast as possible (look at x86 evolution for a perfect example of this). Truth is, if your text processing program has the kind of stuff that doesn't run fast on x86, chances are, it's doing stuff like defeating the branch predictor or data caching, and it's probably not going to run fast on ANY hardware. In fact, there's a good chance that it's not even filling the 4-instructions per cycle limit that modern Pentium-2 descendant cores can do.

Bytecode interpreters fall into a similar category of load-store-jump code and are hard to speed up for similar reasons. A way to put it is that while you can use parallelism to increase the number of calculations per second, you can't use it to increase the number of decisions per second, which tends to be limited by electrical constraints (clock rate, number of cycles of latency on the data cache, number of cycles of latency on issue - which can be reduced with a simpler instruction encoding but not past a certain limit).

I guess there's the special exception of the CPU designed to use dynamic translation - like the Transmeta Crusoe or NVidia's Denver (which in theory the Nintendo NX could use). The catch is that even then, these cores tend to perform well on loopy-mathy code, but not very well on loady-story-jumpy code (which is why the Denver performs better than ARM cores on benchmarks but worse in real world applications), and bytecode tends to be used for loady-story-jumpy code. It also doesn't help that high level programming languages tend to have all sorts of features that make the language inherently slow (one example that comes to mind is Python's data structures that I think use pointers to pointers just for getting to an object's data) and hard to dynamically compile (because they allow dynamic typing hacks, which makes it hard to protect the interpreter against corner cases).

Joe Duarte wrote:
Have you been able to guess at performance improvements from the new ISA? Holding everything else equal, the foundry technology, etc. would ForwardCom be faster or more efficient than a comparable Broadwell-E? I'm thinking of your new loops, for example, and other features. What sense do you have of the performance difference?
That's a pretty high target! To get anything close to this performance, you'd need to run ForwardCom at about 4 instructions per cycle, and considering the complexity of ForwardCom instructions, this would require a pretty large and complex CPU core. My hunch is that it would run at about the same speed as x86, but simply require less design time to implement all the crazy instruction unpacking and 286-era segmenting intricacies. I don't think ForwardCom would scale very well past 4 instructions per cycle, but to be fair I don't think any instruction set does (they tried to do it with Itanium and failed).

Imho most of the ideas and innovation and potential speed gain in ForwardCom is in the vector instructions. This means that it has good potential for numeric-heavy applications (video games! physics!), if you use vectors, which generally means you have to program in C++ and use intrinsics (or use assembly). A few C++ compilers can auto-vectorize (Intel's compiler, MSVC 2012 and later and GCC with all speed options turned on) but it has to be "trivial loops" - if any of the memory accesses alias any other memory access in the loop, or there's any sort of carried state, then it's impossible to vectorize and the compiler will issue normal instructions (ie scalar integers and floats). Basically the compiler has to prove that each loop iteration is 100% independent from other iterations, which generally only happens in numeric-heavy code looping over a buffer and applying exactly the same mathematical operation on each iteration and data coming in neat linear arrays.

For higher level languages (anything dynamically typed like JavaScript or Python etc), those are not designed for vectorization in any way so it would probably run 100% scalar. For middle level languages (Java, C#), those in theory are vectorizable but only in the same cases that C++ is vectorizable (trivial loops doing numerical computation), and generally people use C++ (video games) or Fortran (scientific computing) for those applications so there's not really any point AFAIK.


Author: Agner Date: 2016-08-06 02:44
Hubert Lamontagne wrote:
What about a specialized AES encryption device, accessed using memory mapped I/O like other peripherals such as DMA controllers, timers, etc.? Wouldn't that do the job, and keep the overall complexity lower? The OS would probably have to make sure no 2 programs are using the AES device at the same time but otherwise it would be pretty straightforward: set the key, push the data through. Afaik that's close to how ARM does it.
That's a pretty expensive solution. I wouldn't recommend a clone of the x86 AES instructions either, because they have long latencies which is bad for the smooth pipeline structure of ForwardCom. I would prefer to split up the AES operations into small unit operations with single-clock latency. If the length of vector registers is at least 2048 bits and we have full byte permute instructions then you can have a complete 256 byte lookup table contained in one vector register and do the table lookup with a byte permute instruction. The whole AES algorithm could then be implemented with general permute and XOR instructions without any special-purpose instructions. This would be a flexible solution that could be adapted to other encryption algorithms as well. The price is a large permutation circuit.
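
A scalar C++ model of the table-lookup step described above (illustrative only; no ForwardCom instruction names are assumed): with a 256-byte table held in one long vector register, a general byte permute indexed by the data bytes performs the whole substitution in one instruction, and the round-key step is a plain vector XOR.

#include <cstdint>
#include <cstddef>

// Per-byte table lookup (the job of a full-register byte permute) followed by XOR.
void sub_bytes_xor(const uint8_t table[256], const uint8_t* state,
                   const uint8_t* roundKey, uint8_t* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = table[state[i]] ^ roundKey[i];
}
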
Text parsing is hard to speed up. It tends to be made up of loops full of loads, stores, and conditional branches, and not very many of the kind of mathematical operations that can be sped up. It's inherently serial in nature.
The branch misprediction penalty is equal to the length of the pipeline. The best way to speed up unpredictable branches is to have a short pipeline or to speculatively execute both sides of a 2-way branch simultaneously. Some text processing jobs can be vectorized, using masked operations etc. UTF-8 conversion can be done with the compress_sparse and expand_sparse instructions in ForwardCom (sorry, they are poorly described in the manual).
Bytecode interpreters fall into a similar category of load-store-jump code and are hard to speed up for similar reasons. A way to put it is that while you can use parallelism to increase the number of calculations per second, you can't use it to increase the number of decisions per second, which tends to be limited by electrical constraints
ForwardCom has a multiway branch instruction which is useful for byte code interpreters, but I think the way to speed up byte code is JIT compilation.

Joe Duarte wrote:
Have you been able to guess at performance improvements from the new ISA? Holding everything else equal, the foundry technology, etc. would ForwardCom be faster or more efficient than a comparable Broadwell-E? I'm thinking of your new loops, for example, and other features. What sense do you have of the performance difference?
Hubert Lamontagne wrote:
That's a pretty high target! To get anything close to this performance, you'd need to run ForwardCom at about 4 instructions per cycle, and considering the complexity of ForwardCom instructions, this would require a pretty large and complex CPU core.
The target of ForwardCom is indeed to beat the performance/price ratio of existing systems. It is designed with a simple and consistent pipeline structure: it does one thing per pipeline stage, each taking one clock cycle, except for multiplication, division, and a few other necessarily longer operations.

We should not forget, however, that in many applications the bottleneck is not the CPU but cache, memory access, disk access, or network speed.


Author: fanoI Date: 2016-08-08 06:00
Hello Agner,

what about a hardware-accelerated Decimal Floating Point Unit as present in IBM POWER CPUs? In particular, for business applications the decimal type is necessary, but being done in software it is 100x slower than float / double!


Author: Agner Date: 2016-08-08 10:26
fanoI wrote:
what about a hardware-accelerated Decimal Floating Point Unit
The x86 instruction set has instructions for integer decimal arithmetic. However, these instructions were removed in 64-bit mode because they are never used.

The normal procedure is to convert decimal to binary, do calculations in binary, and convert the result to decimal. These conversions can be vectorized to speed up the process. You can find code examples for this in my vector class library. Anyway, decimal numbers are for human use and the bottleneck will always be the human, not the computer :-)


Author: fanoI Date: 2016-08-09 13:51
Agner wrote:
fanoI wrote:
what about a hardware-accelerated Decimal Floating Point Unit
The x86 instruction set has instructions for integer decimal arithmetic. However, these instructions were removed in 64-bit mode because they are never used.

The normal procedure is to convert decimal to binary, do calculations in binary, and convert the result to decimal. These conversions can be vectorized to speed up the process. You can find code examples for this in my vector class library. Anyway, decimal numbers are for human use and the bottleneck will always be the human, not the computer :-)
I think the instructions you are referring to were for the old BCD mode. I was referring to actual Decimal Floating Point:
https://en.wikipedia.org/wiki/Decimal_floating_point

they exist in C as GCC extensions (_Decimal32, _Decimal64 and _Decimal128), in C++ as decimal32, decimal64 and decimal128, and in C# there is the Decimal type. To do calculations with these types on an x86 CPU you need to resort to software (in POWER CPUs IBM has added a Decimal Floating Point Unit to do this in hardware). They are used for financial calculations where powers of 10 are more important than powers of 2 :-)


Author: Joe Duarte Date: 2016-08-08 18:41
Hubert Lamontagne wrote:
Text parsing is hard to speed up. It tends to be made up of loops full of loads, stores, and conditional branches, and not very many of the kind of mathematical operations that can be sped up. It's inherently serial in nature. Afaik, there are only 2 approaches for this at the moment:

- If your load is to run a whole bunch of programs at the same time (= it's a server), you can use the UltraSparc design: a really simple in-order CPU that uses tons of hyper-threading, and every time something stalls just switch to the next thread. The simple in-order core takes a lot less space so you can cram a whole bunch of them on the same chip, which lets you compensate for speed deficiencies.

- Otherwise, just use a general purpose out-of-order CPU that's as fast as possible. x86 fits this profile perfectly (which is why it's so popular still) and ARM is evolving towards that too. This is part of why RISC is popular: starting with a classic 4-issue out-of-order CPU like the Dec Alpha 21264, it's very hard to find new instructions to add that will actually speed it up at text processing and not be overly specific, and afaik the stuff that speeds it up the most is just making branch prediction and data cache as fast as possible (look at x86 evolution for a perfect example of this).
I'm thinking of approaches like Parabix: http://parabix.costar.sfu.ca/
One of their papers: https://www.cs.sfu.ca/~ashriram/publica ... arabix.pdf

I guess it just comes down to vector instructions and their parallel bitstreams approach. (Another way to boost it would be multiplexed streams, but that's a different plane of the architecture than the ISA.)

Hubert wrote:
That's a pretty high target! To get anything close to this performance, you'd need to run ForwardCom at about 4 instructions per cycle, and considering the complexity of ForwardCom instructions, this would require a pretty large and complex CPU core. My hunch is that it would run at about the same speed as x86, but simply require less design time to implement all the crazy instruction unpacking and 286-era segmenting intricacies. I don't think ForwardCom would scale very well past 4 instructions per cycle, but to be fair I don't think any instruction set does (they tried to do it with Itanium and failed).
Well, Broadwell-E is a high target right now, but the target is moving. It's going to be a low target in 2020. Of course Agner's not necessarily thinking in terms of marketable silicon, but more toward a useful reference and engineering exercise. I'm just wondering what kind of performance wins are possible. Another way of framing it: If a well-funded team rolled up their sleeves and built a new general purpose CPU architecture from scratch, including a new ISA and tape out and all that, using TSMC or Samsung's 16/14 nm process, could they beat Intel/Skylake? Maybe Intel is wringing all the performance out of current processes that the physics allows, and the legacy technical debt is only a design cost overhead. They seem to be treading water since Haswell, so I sometimes wonder if a better architecture would make a difference or if the wall is more fundamental than Intel.

By the way, do you think Itanium was just too early, with too little tooling? Does an out of order CISC have any inherent advantage over VLIW or EPIC? It seems goofy to send one instruction at a time. I haven't found any deep-dive post-mortems on Itanium, just vague claims that no one could build a good compiler for it, or that the physical product just wasn't that fast. I think we're going to see some big advances in compiler technology in the near future, with powerful platforms like IBM Watson doing things a laptop-bound compiler can't do.

By the way, regarding the idea of specialized AES encryption device, Intel offered QuickAssist on the v2 and v3 Xeons: http://www.intel.com/content/www/us/en/ ... rview.html

They don't seem to offer it on the Broadwells. It's a dedicated accelerator for encryption and compression, but I've never found a detailed review anywhere. Do either of you know more about it? I'm surprised it didn't get more attention.


Author: Hubert Lamontagne Date: 2016-08-09 17:49
Joe Duarte wrote:
I'm thinking of approaches like Parabix: http://parabix.costar.sfu.ca/
One of their papers: https://www.cs.sfu.ca/~ashriram/publica ... arabix.pdf

I guess it just comes down to vector instructions and their parallel bitstreams approach. (Another way to boost it would be multiplexed
streams, but that's a different plane of the architecture than the ISA.)

Hmm, that's an interesting technique. Very reminiscent of how they used to do graphics on the Amiga and in EGA (which worked well for 2d but had to be abandoned once they started doing 3d).

Joe Duarte wrote:

Hubert wrote:
That's a pretty high target! To get anything close to this performance, you'd need to run ForwardCom at about 4 instructions per cycle, and considering the complexity of ForwardCom instructions, this would require a pretty large and complex CPU core. My hunch is that it would run at about the same speed as x86, but simply require less design time to implement all the crazy instruction unpacking and 286-era segmenting intricacies. I don't think ForwardCom would scale very well past 4 instructions per cycle, but to be fair I don't think any instruction set does (they tried to do it with Itanium and failed).
Well, Broadwell-E is a high target right now, but the target is moving. It's going to be a low target in 2020. Of course Agner's not necessarily thinking in terms of marketable silicon, but more toward a useful reference and engineering exercise. I'm just wondering what kind of performance wins are possible. Another way of framing it: If a well-funded team rolled up their sleeves and built a new general purpose CPU architecture from scratch, including a new ISA and tape out and all that, using TSMC or Samsung's 16/14 nm process, could they beat Intel/Skylake? Maybe Intel is wringing all the performance out of current processes that the physics allows, and the legacy technical debt is only a design cost overhead. They seem to be treading water since Haswell, so I sometimes wonder if a better architecture would make a difference or if the wall is more fundamental than Intel.
That's a good question. I think that's exactly what ARM is trying to do! In fact, there are a whole bunch of ARM licensees that are taking a stab at this, and imho it's proving to be a little bit harder than expected...

The catch is that once you have a cpu as big as a Broadwell, it has tons of stuff in it: in particular, a very complex memory architecture with a whole laundry list of features: aggressively pipelined multibank data cache with 2 read ports and 1 write port, TLB, out-of-order non-blocking everything with a memory ordering buffer, prefetching, multi-core cache coherency protocol, handling of misaligned addresses... This is basically almost totally independent of the architecture - you could implement all the same stuff on an ARM or MIPS or POWER core and it would be just as complex.

If you factor in the other stuff like aggressively pipelined secondary units like the FPU and SIMD unit, heavy branch prediction, hyper threading... this is all stuff that's basically the same on other architectures. The only thing that x86 changes is the front end. The front end part of a CPU isn't THAT large and complex compared to all the rest on modern cores with so many features.

And if, say, some new silicon process changes the distribution of bottlenecks so that faster instruction decoding becomes more important again, then Intel can simply put in a larger or broader micro-code cache, or maybe even go back to the instruction trace cache like on the Pentium 4. (ok, the P4 is a bit reviled, but the truth is that it still held on pretty good compared to the other well designed architectures of the time)


Joe Duarte wrote:
By the way, do you think Itanium was just too early, with too little tooling? Does an out of order CISC have any inherent advantage over VLIW or EPIC? It seems goofy to send one instruction at a time. I haven't found any deep-dive post-mortems on Itanium, just vague claims that no one could build a good compiler for it, or that the physical product just wasn't that fast. I think we're going to see some big advances in compiler technology in the near future, with powerful platforms like IBM Watson doing things a laptop-bound compiler can't do.
I think it boils down to a couple things... The first generation Itanium came out late. Because of this, RISCs and x86s of the day had an extra generation to catch up, which means that the first generation Itanium totally failed to impress. Second generation came out pretty fast and sorta fixed this, but Itanium could never quite reach up to the speed of the fastest RISCs and x86s. One guy said that it was never as fast as HP's previous architecture (PA-RISC, totally a classic RISC).

Second thing, Itanium was basically betting that out-of-order would fail to deliver enough speed gains to keep the RISCs and x86s relevant. It turns out that out-of-order delivers speed gains at just the right place: memory accesses. Real life programs have lots of memory accesses going on, which gives out-of-order a fairly hard-to-beat edge. It's just easier to deal with memory latency with an out-of-order pipeline than with the Itanium's insane ALAT thing with checked speculative loads and stores and whatnot. Late Itaniums can issue 12 instructions per cycle, which x86 will probably never be able to do, but even though x86 only issues 4 instructions per cycle, 3 of those can have a built-in memory access which can be crazy-reordered by the aggressive out-of-order pipeline, and it turns out that that's enough to smoke the Itanium.

There's a question on Quora about exactly this and it has tons of interesting answers:
Why did the Intel Itanium microprocessors fail? https://www.quora.com/Why-did-the-Intel ... ssors-fail


Author: Joe Duarte Date: 2016-08-11 19:09
Hubert Lamontagne wrote:
The catch is that once you have a cpu as big as a Broadwell, it has tons of stuff in it: in particular, a very complex memory architecture with a whole laundry list of features: aggressively pipelined multibank data cache with 2 read ports and 1 write port, TLB, out-of-order non-blocking everything with a memory ordering buffer, prefetching, multi-core cache coherency protocol, handling of misaligned addresses... This is basically almost totally independent of the architecture - you could implement all the same stuff on an ARM or MIPS or POWER core and it would be just as complex.

If you factor in the other stuff like aggressively pipelined secondary units like the FPU and SIMD unit, heavy branch prediction, hyper threading... this is all stuff that's basically the same on other architectures. The only thing that x86 changes is the front end. The front end part of a CPU isn't THAT large and complex compared to all the rest on modern cores with so many features.

And if, say, some new silicon process changes the distribution of bottlenecks so that faster instruction decoding becomes more important again, then Intel can simply put in a larger or broader micro-code cache, or maybe even go back to the instruction trace cache like on the Pentium 4. (ok, the P4 is a bit reviled, but the truth is that it still held on pretty good compared to the other well designed architectures of the time)
By the way, this point and your other point on dynamic translation, Transmeta, etc. remind me of the recent buzz around Soft Machines. I don't think I've seen any discussions of Soft Machines here. In particular, I was reminded of Ian Cutress saying in this article that Soft Machines was more than ten years behind Intel on speculation and branch prediction: http://www.anandtech.com/show/10025/exa ... e-visc-ipc

That really struck me as illustrating how much accrued work and research has gone into a modern Intel CPU, and that an ISA might be a relatively minor piece.

While I have you, what goes into implementing new instructions? What's the physical manifestation of an instruction? Is there one, or is it typically just microcode? I'm always amazed by how more instructions are constantly added, and that they consistently speed things up.


Author: Agner Date: 2016-08-12 00:11
Joe Duarte wrote:
what goes into implementing new instructions? What's the physical manifestation of an instruction? Is there one, or is it typically just microcode? I'm always amazed by how more instructions are constantly added, and that they consistently speed things up.
My idea is not to use microcode in ForwardCom at all (and no microcode cache, of course).
Intel and AMD use microcode only for complicated instructions that use many µops, as microcode is quite inefficient. There are thousands of different instruction codes in x86 and most of them are implemented directly in silicon. That must be a lot of silicon! I am sure that this is slowing things down. Some of these instructions were added mainly for marketing reasons, providing no real advantage. Many instructions are obsolete, but they keep supporting them for backward compatibility.


Author: Hubert Lamontagne Date: 2016-08-12 18:13
Joe Duarte wrote:
what goes into implementing new instructions? What's the physical manifestation of an instruction? Is there one, or is it typically just microcode? I'm always amazed by how more instructions are constantly added, and that they consistently speed things up.
Afaik, generally, there are '4 levels of implementation' for chips:

- FPGA: You write your logic in Verilog or VHDL, compile it to the FPGA, then run it.

- FPGA to ASIC conversion: This is basically an ASIC with a bunch of FPGA-like cells, but where the metal interconnection layer is customized. This has much lower fixed costs than an ASIC, but there's only a speed gain of about 30% and a cost gain of about 50% over FPGAs.

- ASIC: The Verilog / VHDL code is compiled to a proper circuit layout. Most chips use this. This makes much faster chips and has a much lower unit cost, but the fixed costs are very high ($1million+, need lots of units). There needs to be lots of verification.

- ASIC with additional custom-layout parts: Same as above but some parts are custom-laid out rather than automatically. AFAIK Intel does this for the most speed-critical parts of its CPUs (like the L1 Data Cache) and it is part of its advantage over AMD (I've read somewhere that AMD has stopped doing custom layout).

More info: http://electronics.stackexchange.com/qu ... -asic-made

Within the chip, instructions that have the same memory access pattern and latency often are interpreted more or less the same, with the ALU doing the different possible arithmetic operations in parallel and then having a multiplexer at the end that selects which result it uses. In Verilog, it looks something like this:

// Simplified ALU fragment: all results are computed in parallel and a
// multiplexer selects the one indicated by alu_op (inside a clocked always block).
always @(posedge clk) begin
    case (alu_op)
        0:  alu_out <= b;                                  // move
        1:  alu_out <= a + b;                              // add
        2:  alu_out <= a - b;                              // subtract
        3:  alu_out <= a & b;                              // and
        4:  alu_out <= a | b;                              // or
        5:  alu_out <= a ^ b;                              // xor
        6:  alu_out <= a >> b;                             // shift right (logical)
        7:  alu_out <= a << b;                             // shift left
        8:  alu_out <= $signed(a) >>> b;                   // shift right (arithmetic)
        9:  alu_out <= a >= b ? 1 : 0;                     // unsigned compare
        10: alu_out <= ($signed(a) >= $signed(b)) ? 1 : 0; // signed compare
        default: alu_out <= alu_out;
    endcase
end

Agner wrote:
My idea is not to use microcode in ForwardCom at all (and no microcode cache, of course).
Intel and AMD use microcode only for complicated instructions that use many µops, as microcode is quite inefficient. There are thousands of different instruction codes in x86 and most of them are implemented directly in silicon. That must be a lot of silicon! I am sure that this is slowing things down. Some of these instructions were added mainly for marketing reasons, providing no real advantage. Many instructions are obsolete, but they keep supporting them for backward compatibility.
Yeah, no microcode is a nice goal to strive for. Though that requires a certain amount of "instruction set discipline" : no long latency or multi-step or multi-µop operations, or operations that load/store an excessive number of registers or do multiple memory accesses, or special-case exceptions, or instructions that can have all sorts of strange side effects deep in the pipeline and have to be specially executed in-order (like CPU mode manipulation using the flag registers for instance)...


Author: grant galitz Date: 2016-08-22 22:53
Pardon my slight skimming of the contents of this thread.

I was wondering if adding the ability for the amount of registers visible to be user programmable, like THUMB on ARM, was overviewed prior to forwardcom's writing. I could see special branch instructions upgrading or downgrading the number of registers to fine tune the instruction caching/fetching vs work done per instruction as a still useful tool. I'm also curious as to the implication this would have on context switching overhead, and any hardware logic issues.


Author: Agner Date: 2016-08-22 23:59
grant galitz wrote:
I was wondering if adding the ability for the amount of registers visible to be user programmable, like THUMB on ARM, was overviewed prior to forwardcom's writing.
The instruction code template has 5 bits for each register operand. This gives 2^5 = 32 registers.

Adding more registers would make the instructions bigger. Fewer registers would be wasteful. If the number of registers is variable then all the software, including function libraries and system code has to be recompiled every time the number of registers is changed to optimally use the number of available registers. I think it would be too complicated to reconfigure register space as cache or something else. This would require a lot of extra wiring between otherwise unrelated parts of the chip.


Author: Hubert Lamontagne Date: 2016-08-24 00:33
grant galitz wrote:
Pardon my slight skimming of the contents of this thread.

I was wondering if adding the ability for the amount of registers visible to be user programmable, like THUMB on ARM, was overviewed prior to forwardcom's writing. I could see special branch instructions upgrading or downgrading the number of registers to fine tune the instruction caching/fetching vs work done per instruction as a still useful tool. I'm also curious as to the implication this would have on context switching overhead, and any hardware logic issues.
One reason ARM has a 16bit instruction mode (THUMB) is that not all ARM cores have an instruction cache. If you're loading instructions from RAM or slow ROM (sometimes with a 16bit bus), you really really want instructions to be as small as possible because it's so costly to load them up. THUMB mode was often used on the GBA when running code from the slower RAM or ROM, which had a 16bit bus. This is also why the SuperH has 16bit instructions as well.

Having an explicit switch between ARM and THUMB mode simplifies the pipeline because it removes the need to have a prefetch buffer to handle unaligned instructions. Later on, they added THUMB-2 mode which removed this limitation and would let the program mix up 16-bit and 32-bit instructions.

The catch with small instructions is that 16-bit THUMB code runs significantly slower than ARM on modern cores that have instruction caches and large, 64bit buses, since so many instructions need to be broken up. And also, THUMB-2 code doesn't run faster than ARM in most applications, in spite of taking less instruction cache. One comparison of Linux kernel size and Coremark benchmark results gives:

vmlinux-ARM32: text 8.4mb, data 463kb, bss 827kb, total 9.7mb
vmlinux-thumb2: text 6.7mb, data 463kb, bss 827kb, total 8.0mb

The best is -O3 -funroll-loops -marm -march=armv5te -mtune=cortex-a8
The best armv7-a is -O3 -funroll-loops -marm -march=armv7-a -mtune=cortex-a8 at 95.2 % of overall best
The best Thumb-2 is -O3 -funroll-loops -mthumb -march=armv7-a -mtune=cortex-a8 at 88.7% of overall best
The best Thumb-1 is -O2 -mthumb -march=armv5te -mtune=cortex-a8 at 64.4% of overall best

(Source)

So being able to lower the number of registers below 32 to shrink opcodes on a CPU design that's targeting large machines with lots of RAM doesn't seem too relevant to me...

Now, increasing the number of registers is another question and I'm curious about what people have to say about that.

ARM with scalable vector extensions

Post by agner »

Author: Agner Date: 2016-08-23 00:24
Two people have sent me this link. Yesterday, ARM announced a future extension with variable vector length:
ARM Announces ARM v8-A with Scalable Vector Extensions http://www.anandtech.com/show/10586/arm ... ata-center

The details are not available yet, but it appears that the vector length must be a multiple of 128 bits. Maybe it must be a power of 2:
Technology Update: The Scalable Vector Extension (SVE) for the ARMv8-A architecture https://community.arm.com/groups/proces ... chitecture

This new ARM extension has the same idea as ForwardCom that the same code should be able to run on different processors with different vector length. But the hardware seems much more complicated than ForwardCom. The ARM extension requires that the hardware scheduler is able to split long vectors into shorter ones or merge short vectors into longer ones. The compiler would also be quite complicated, I think. The special ForwardCom loop type is much more elegant and efficient with no limit to vector length, see forwardcom.info.


Author: Jorcy Neto Date: 2016-08-23 06:11
Agner wrote:
But the hardware seems much more complicated than ForwardCom. The ARM extension requires that the hardware scheduler is able to split long vectors into shorter ones or merge short vectors into longer ones. The compiler would also be quite complicated, I think. The special ForwardCom loop type is much more elegant and efficient with no limit to vector length, see forwardcom.info.
Whereas splitting wider vector/SIMD instructions into narrower execution units is quite straightforward (just like early SSE/SSE2 in PIII/P4), merging narrower vectors for wider units really seems way more complex.


Author: Hubert Lamontagne Date: 2016-08-26 11:21
Agner wrote:
Two people have sent me this link. Yesterday, ARM announced a future extension with variable vector length:
ARM Announces ARM v8-A with Scalable Vector Extensions.


The details are not available yet, but it appears that the vector length must be a multiple of 128 bits. Maybe it must be a power of 2:
Technology Update: The Scalable Vector Extension (SVE) for the ARMv8-A architecture.


This new ARM extension has the same idea as ForwardCom that the same code should be able to run on different processors with different vector length. But the hardware seems much more complicated than ForwardCom.
Well, the ARM version is also somewhat more aggressive since it tries to deal with scatter-gather, carried loop dependencies (aka feedback) and data-dependent loop exits.

Agner wrote:
The ARM extension requires that the hardware scheduler is able to split long vectors into shorter ones or merge short vectors into longer ones.
Hmm, I'm re-reading the article, and now I'm really wondering what they actually mean about how it adapts to different vector sizes. Either:
- It's a software scheme (in which the software knows the vector size, and if you have, say, 512 bit vectors, then the top 1536 bits always read as 0) and the article is slightly wrong in the way it presents things.
- Or, vectors can always be 2048 bits and it simply uses more micro-ops on narrower processors (this is already how NEON works) - this is relatively straightforward to implement, although this leaves the issue of having to deal with oversized register files (I'm not up to date on how costly this is in hardware).
- Or maybe it's some other crazier scheme? I can't quite tell.

Agner wrote:
The compiler would also be quite complicated, I think.
Auto-vectorization is always complicated!

I'm not sure exactly why the ARM version is more complicated than ForwardCom though. Yes, the ARM version requires a more complex calculation to get vector length and turn it into lane flags, although there's a good chance this extra calculation can run in parallel with other stuff and essentially be free. The ARM version doesn't require transforming the loop index 0..n-1 into a negative index from end of buffer, which I'm sure is doable in SSA form but might get complex for loops with lots of references to the indexing variable.

Agner wrote:
The special ForwardCom loop type is much more elegant and efficient with no limit to vector length, see forwardcom.info.
It's true that the ARM version doesn't have the insight that if you use reverse indexing from the end of the buffer, you can clamp the index and use it as a count for the number of items per iteration. If I'm reading the news post correctly, I think the ARM idea is that they use the lane predication flags as vector sizes, so they probably need a couple of specialized instructions to produce the correct clamped item counts and masks. But they're doing both predication and variable vector size using the same hardware, which I guess is a good thing.

Maybe they're using general-purpose integer registers as lane masks - that would explain the 2048bit size limit (64bit register = 64 lane flags * 32bit float per lane = 2048bits). The load/store unit needs to know the lane masks, so they need to be known early in the pipeline, and I don't think it's using too many general-purpose register file ports.


Author: Jorcy Neto Date: 2016-12-20 07:22
Agner wrote:
Two people have sent me this link. Yesterday, ARM announced a future extension with variable vector length:
ARM Announces ARM v8-A with Scalable Vector Extensions.

The details are not available yet, but it appears that the vector length must be a multiple of 128 bits. Maybe it must be a power of 2:
Most of the HotChips presentations are made publicly available late each year. Now you can find further details on SVE at:
http://www.hotchips.org/wp-content/uplo ... 51-v11.pdf

Proposal for instruction set - now on Github

Post by agner »

Author: Hubert Lamontagne Date: 2016-09-05 01:53
One recurrent question: How will ForwardCom run Linux's mmap function?


Author: Agner Date: 2016-09-05 05:12
You can map a virtual address space as unused, read only, write only, or read/write. In this way you can generate traps on first read and first write to a block of memory. If the mapped file is bigger than available RAM and you are writing in a random fashion then it will be more likely to flush a dirty part of the area to the file than to fragment the memory. The number of memory mapped files you can have is obviously limited by the size of the memory map.

I wouldn't recommend the use of memory mapped files with ForwardCom. Now that solid state disks are becoming popular (and bigger) there is hardly any point in using memory mapping for the purpose of minimizing disk head movements.


Author: Hubert Lamontagne Date: 2016-09-05 18:33
There's more to mmap:

- It can be used to allocate memory, page by page (for instance as a back-end for malloc()).
- It can be used to memory-map hardware devices. For instance, sometimes video device VRAM is mapped by mmap()ing /dev/mem at 0xa0000.
- It can be used for inter-process shared memory: if multiple programs mmap() the same file, they will actually get a mapping to the same block of RAM.
- The fork() function relies on paging to do a shallow copy of the whole memory space, and then selectively does a real deep copy of a page when one of the forks writes to it and triggers a page fault.
- Demand paging of newly allocated memory lets the OS deal with badly behaved memory allocators, for instance when the Java virtual machine allocates 512mb of ram on startup. Executable data of large shared libraries like QT can also be loaded on-demand this way.
- When RAM runs out, mmap()ed memory that is backed by disk files and doesn't need to be written to swap space can be prioritized.
- Also, mmap() works well with SSDs.


Author: Agner Date: 2016-09-06 08:51
Hubert Lamontagne wrote:
There's more to mmap
I was not aware of these details.

Many of the applications of mmap you mention look like hacks in my opinion. A redesign of the memory management system should offer genuine solutions to these problems rather than hacks.

Malloc should be implemented in a way that matches the new memory management system.

Video ram should be accessed only through a device driver. This device driver may share video memory with an application only if the OS allows an application to monopolize the screen.

Sharing of memory between processes is described in the ForwardCom manual. It should be done in a standardized way for security reasons.

Demand paging and memory swapping to disk are separate problems that should have dedicated solutions without using mmap.


Author: Bigos Date: 2016-09-06 16:19
I don't think you will be able to remove all usages of mmap(). Memory mapping is very convenient, and if a microarchitecture doesn't allow for it, it won't be used much for general purpose computing. Programmers are already familiar with it and have created a lot of utilities that use it. For example, you couldn't port most of the general purpose OSes to such an architecture, since they are based on virtual memory (there is a special fork of Linux called uClinux [1] that doesn't need an MMU, though it's specialized for embedded solutions).

For example, 3D graphics APIs have memory buffer mapping functions that are used to map part of GPU memory into the process address space. Since this part is already exclusive to the application (it has been allocated previously), there is no harm in bypassing the device driver (which performs the map), and it lets the user perform reads and writes with higher performance. A solution with a shadow buffer that is copied around when needed is possible, but would reduce performance.

The Mill CPU family uses protection data that describes which memory is accessible to which application [2]. It is stored in a PLB (protection look-aside buffer) that, unlike a TLB, is not on the critical path, since it only has to prevent failing writes from taking effect and make failing reads cause a fault - no address translation is performed. Maybe such a mechanism would be a good compromise here? The OS (or any other entity) would be able to let another one access its memory on demand.

Mill also has a memory translation table, but the TLB sits outside of the core, near a memory controller. The mapping is global, not per-process (no TLB flush on context switch needed) and does the translation only when an access goes to the memory, at which point the latency is already high so making the TLB fast is not really that important. All the data in the caches is virtually addressed, virtually tagged.

But I repeat myself here. We could come up with a better solution instead, though it doesn't appear that it will be simple.

Author: Hubert Lamontagne Date: 2016-09-06 22:41
Ok I checked, there are 2 OS calls to allocate RAM in Linux: sbrk() and mmap(). They serve as a back-end to malloc(). The "heap" segment shown on some Linux memory map diagrams is the section allocated by sbrk(), and the rest of user-allocated ram is allocated using mmap().

The current default implementation of malloc() is ptmalloc2 and it uses both sbrk() and mmap(), but in different scenarios:
- Small to medium allocations on the main thread use sbrk(), because it doesn't matter too much if some blocks get free()'d later on since the allocator can still reuse the leftover holes for allocations of the same size.
- Large allocations on the main thread use mmap(), because sbrk() can only reclaim free()'d memory in the exact reverse order that it was allocated, so that a large allocation can easily become impossible to reclaim if it's followed by small longer-lived allocations.
- Allocations on other threads use mmap() instead of sbrk(), because only mmap() is thread-safe. Also, using sbrk() would create a series of heap segments that are used by different threads, which would make it hard to reclaim memory until all the threads have finished running. A toy sketch of this sbrk/mmap split follows.
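A toy illustration of that split (not ptmalloc2's actual logic; the 128 KB threshold is only the traditional default value of the mmap threshold):

Code: Select all

#include <cstddef>
#include <cstdint>
#include <sys/mman.h>
#include <unistd.h>

// Toy allocator: small requests extend the classic brk/sbrk heap, large
// requests get their own anonymous mapping. Note that sbrk() is not
// thread-safe, which is exactly why real allocators only use it for the
// main arena.
void *toy_alloc(size_t size) {
    const size_t kMmapThreshold = 128 * 1024;   // illustrative threshold
    if (size < kMmapThreshold) {
        void *p = sbrk(static_cast<intptr_t>(size));  // extend the heap segment
        return (p == reinterpret_cast<void *>(-1)) ? nullptr : p;
    }
    void *p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return (p == MAP_FAILED) ? nullptr : p;
}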


Author: Agner Date: 2016-09-07 00:10
Hubert Lamontagne wrote:
Ok I checked, there are 2 OS calls to allocate RAM in Linux: sbrk() and mmap().
Thanks for the clarification. sbrk would be easy to implement in ForwardCom. And it could be thread safe because each thread has its own memory map and its own private memory in ForwardCom. Everything that causes a heavy fragmentation of memory for the same thread is a no go. In some applications, the limitation can be circumvented by making multiple threads - one for each block of shared memory.


Author: Hubert Lamontagne Date: 2016-09-07 17:18
Agner wrote:
And it could be thread safe because each thread has its own memory map and its own private memory in ForwardCom.
Ah, but if your threads don't share memory, then they're not threads, they're separate processes, which is a different thing!


Author: Agner Date: 2016-09-08 00:39
Hubert Lamontagne wrote:
if your threads don't share memory, then they're not threads, they're separate processes, which is a different thing!
ForwardCom has a special security feature for threads. A process can have a parent thread and several child threads. All child threads have access to the memory of the parent thread. The stack, heap and thread-local storage of a child thread is private by default, not accessible to its parent and siblings. This can be useful if the process is servicing several clients. Communication between threads - e.g. a semaphore - must use the parent's memory space.

Thread memory is shared only if this is necessary for compatibility with legacy software.


Author: Hubert Lamontagne Date: 2016-09-08 17:24
Agner wrote:
ForwardCom has a special security feature for threads. A process can have a parent thread and several child threads. All child threads have access to the memory of the parent thread. The stack, heap and thread-local storage of a child thread is private by default, not accessible to its parent and siblings. This can be useful if the process is servicing several clients. Communication between threads - e.g. a semaphore - must use the parent's memory space.

Thread memory is shared only if this is necessary for compatibility with legacy software.
If you want to port anything that uses threads on a POSIX OS (which all use the pthreads library) or on Windows, the expected behavior is that all threads share the same memory, yes. For instance, if multiple threads can modify a std::vector (protected by a mutex), then if the vector changes size and its memory is reallocated, all threads have to see the newly allocated memory regardless of which thread caused the size change. Since std::vector memory allocations ultimately use malloc(), this means that malloc() has to return memory which is visible to all threads by default.
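A minimal sketch of that situation with standard C++11 threads (names and counts are arbitrary): the block that push_back() reallocates in one thread must be visible to the other thread as soon as it takes the same mutex.

Code: Select all

#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

std::vector<int> v;   // grows, and therefore reallocates, while both threads run
std::mutex m;

void producer() {
    for (int i = 0; i < 100000; i++) {
        std::lock_guard<std::mutex> lock(m);
        v.push_back(i);            // may reallocate: the new block comes from malloc()
    }
}

void consumer() {
    int last = -1;
    for (;;) {
        std::lock_guard<std::mutex> lock(m);
        if (!v.empty())
            last = v.back();       // must see whatever block the producer allocated
        if (v.size() == 100000)
            break;
    }
    std::printf("last element seen: %d\n", last);
}

int main() {
    std::thread a(producer), b(consumer);
    a.join();
    b.join();
    return 0;
}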


Author: Commenter Date: 2016-09-07 19:28
I don't consider myself well informed on this topic, so please excuse me for any mistakes.
Agner wrote:
there is hardly any point in using memory mapping for the purpose of minimizing disk head movements
I've actually never heard of mmap being useful for that purpose? mmap is useful for reading files as it avoids an extra memory copy. When a file is mapped, the userland application essentially can read directly from the kernel's disk cache buffer. On the other hand, with standard read() calls, the kernel has to make an additional copy into the userland supplied buffer. This works similarly for write() calls.
As I/O gets faster, reducing this kind of overhead becomes more important.
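For reference, the two access paths being compared look roughly like this (a hedged sketch; both functions take an already-opened file descriptor and assume a non-empty file):

Code: Select all

#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Path 1: read() - the kernel copies data from its page cache into 'buf'.
long sum_with_read(int fd) {
    char buf[4096];
    long sum = 0;
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        for (ssize_t i = 0; i < n; i++)
            sum += buf[i];
    return sum;
}

// Path 2: mmap() - the program reads the cached pages in place, no extra copy.
long sum_with_mmap(int fd) {
    struct stat st;
    fstat(fd, &st);
    char *p = static_cast<char *>(
        mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0));
    long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += p[i];
    munmap(p, st.st_size);
    return sum;
}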

In fact, I'd imagine that with faster NVRAM style devices, mmap may allow the file to be read directly off the I/O device without ever being read into system RAM.


Author: Bigos Date: 2016-09-08 13:04
Commenter wrote:
In fact, I'd imagine that with faster NVRAM style devices, mmap may allow the file to be read directly off the I/O device without ever being read into system RAM.
Actually, this is what the Linux DAX infrastructure does. It allows you to map NVRAM directly into the process's address space.
agner
Site Admin
Posts: 177
Joined: 2017-10-15, 8:07:27
Contact:

Paging

Post by agner »

Author: Kurt Baumgardner Date: 2016-09-09 23:40
I have received this mail from Kurt Baumgardner and I think it belongs on the forum so that others can join the discussion:

Hi Agner,

My name is Kurt Baumgardner, and I've been reading your processor design draft. Very impressive! Quite obviously a labor of love. You've done your research, which is also obvious from the rest of your site. I highly respect your work and findings.

Please understand, I don't mean to be critical. But I had to mention my worry about the decision to drop paging from the memory model, which I think is going to be missed. In a handful of 'controversial' choices, you provided a logical rationale for going with one method vs. another, but, either I missed it, or you didn't go into detail about why there is no paging. I think it is related to your application memory map model, which, if I read correctly, is a simpler (and possibly tighter) app-based, "just what's needed" model. Such a model runs counter to a typical paging scheme, where most mallocs receive pages vs. arbitrary-sized blocks. I will be the first to admit that I must read that section a few more times. And, even then, it's a deep subject to follow.

Regardless, I feel that, from what I understand so far, the memory model might be a bit too simple. You described an idea of the compiler calculating the needed stack size, and/or profiling to attempt to determine the needs. And, this is made easier with the 2 stack approach (which I really like). I understand this pre-calculation approach for optimization, but is its purpose also to help reduce complexities that would make your memory model seem less attractive? (I'm not sure if I described that understandably.)

Let me take a step back. My biggest issue/fear/disappointment is with the removal of paging. It is always nice to fit everything in memory (64Mb is enough for anyone, right?), but you can't always do it. With all the ideas of future compatibility, like with software automatically taking advantage of larger vectors, the loss of paging seems like a huge step backwards, as it seriously limits machines without large amounts of installed memory. I know of a small percentage of server apps that allocate memory with disregard for actual physical memory size, depending on and expecting the paging hardware and software to handle the otherwise messy details of managing extremely large datasets.

In my opinion, 2 of the most important inventions of computing are: #1: using free memory as a disk cache, and #2: built-in automatic paging. There's really no other practical way to provide apps the freedom to allocate memory as needed. Apps have no business trying to home bake their own paging scheme.

Without automatic paging, all the work you did for future compatibility is wasted. Why? Because developers will not write the big programs that could take advantage of future improvements, because they're afraid of running out of memory!

I know the goal is to simplify the system, for elegance, for cost, for power efficiency, for performance, and many other good reasons. But, your future chip must at least be able to support the hugely-successful and useful capabilities of today's processors. The loss of paging cripples the system severely as a whole.

Or, maybe I'm missing something huge. With the current spec, would you simply have to crash a launching (or running) program with an "Out-of-memory" error? Memory is handled so well currently that many programmers don't even test for the possibility of memory allocation to fail (which is awful, but also amazing that they can get away with it).

Thanks for your time reading, and, if you have a minute or two, please let me know what you think, or your rationales, or your thoughts. If you wish, this text could be placed into your forum. Have a nice day!

By the way, I *really* like the idea of programmer-built extensions to the instruction set. I would love to have the following capabilities:

1. Software gate connection (direct gate array access).
2a. Microcode programming (if so equipped), or
2b. the ability to control load/store, ALU, bit shift, cache control, vectoring, etc hardware.
3. An on-chip cache dedicated to user instructions. Could be used for extremely fast lookups/translations, pre-calculated math, etc.

(one can dream!)

Kurt


Author: Agner Date: 2016-09-10 00:18
Kurt Baumgardner wrote:
my worry about the decision to drop paging from the memory model,
Current systems have a very complicated memory paging system with huge multi-level page tables which are partially cached in the on-chip TLB. This system is complicated and uses a lot of resources.

My goal with ForwardCom is to see what can be gained by a complete redesign. The paging system seemed an obvious candidate for simplification so I got the idea of replacing the huge number of fixed-size pages with a small number of variable-size pages. It was just an idea that I wanted to discuss, and as you can see it has already generated a lot of discussion.

This memory map system will only work if we can avoid heavy memory fragmentation, but if it works it will be a great advantage. The memory map will be so small that it can be contained entirely on the chip. You can have a separate memory map for each thread and you can easily switch memory map on system calls and task switches.

An application can still allocate memory dynamically, but preferably in a way that doesn't cause too much fragmentation. If an application allocates a huge amount of memory, not knowing how much it will actually need, then the OS may just allocate virtual memory in the first place. When physical memory is needed, it will be allocated in blocks of exponentially growing sizes (each new block will be bigger than the previous one) in order to minimize memory fragmentation. As a last resort, the system may move data around in memory if a contiguous virtual memory space has become mapped to a too fragmented physical memory. Moving data in memory may be fast because the data path to cache (and to main memory?) can be as wide as the maximum vector length.
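To make the "exponentially growing blocks" point concrete, here is a toy model (the numbers are illustrative, not from the ForwardCom spec): because each block doubles, a heap that eventually reaches N bytes occupies only on the order of log2(N) memory map entries.

Code: Select all

#include <cstdint>
#include <vector>

struct Block { uint64_t base, size; };

// Grow a heap by handing out blocks whose sizes double each time. The point
// is the entry count: reaching e.g. 1 GB from a 64 KB first block takes only
// about 15 blocks, so the on-chip memory map stays small.
std::vector<Block> grow_to(uint64_t needed) {
    std::vector<Block> blocks;
    uint64_t base = 0, size = 64 * 1024, total = 0;   // 64 KB first block (made up)
    while (total < needed) {
        blocks.push_back({base, size});
        base += size;                 // assumes the next range is still free
        total += size;
        size *= 2;                    // each new block is bigger than the last
    }
    return blocks;
}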

The advantage of having variable-size memory map entries is that you will need far fewer entries. The disadvantage is that you need a lot of comparators to check all entries in one or a few clock cycles. The actual number of entries we will have depends on the hardware implementation, and this is subject to experimentation. A well behaved application that causes little or no memory fragmentation will need very few memory map entries. So a programmer who makes a performance-critical application should exercise a little discipline to avoid excessive memory fragmentation.
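A software model of what each of those comparators has to do (hypothetical field names; in hardware all entries would be checked in parallel rather than in a loop):

Code: Select all

#include <cstdint>
#include <optional>

struct MapEntry {
    uint64_t base;     // start of the block (virtual address)
    uint64_t size;     // block length, not restricted to a power of two
    uint64_t offset;   // added to the virtual address to form the physical address
    uint32_t perms;    // read/write/execute permission bits
};

// One full range comparison (two 64-bit magnitude compares) per entry, plus a
// permission check. With few, large, variable-size entries this whole table
// can live on the chip.
std::optional<uint64_t> translate(const MapEntry *map, int n,
                                  uint64_t vaddr, uint32_t need) {
    for (int i = 0; i < n; i++) {
        const MapEntry &e = map[i];
        if (vaddr >= e.base && vaddr - e.base < e.size && (e.perms & need) == need)
            return vaddr + e.offset;
    }
    return std::nullopt;   // no entry matched: raise a memory management fault
}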


Author: Hubert Lamontagne Date: 2016-09-11 13:40
I guess the system is somewhat close to software TLB schemes like on the MIPS. The difference is, of course, that in software TLB schemes, pages must have 2^N sizes and be aligned. The advantage is that you don't need a full comparator per TLB entry and a content-addressable memory is sufficient, which means a larger TLB is possible - MIPS classically had a 64-entry TLB.
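The hardware cost difference can be stated in one line each (a hedged sketch of the two match styles, not any particular implementation):

Code: Select all

#include <cstdint>

// Power-of-two, aligned page (classic TLB/CAM style): a masked equality test.
bool match_pow2(uint64_t vaddr, uint64_t tag, unsigned page_bits) {
    return (vaddr >> page_bits) == tag;
}

// Variable-size entry (ForwardCom-style map): a full base/limit range check,
// i.e. two magnitude comparators per entry instead of one equality match.
bool match_range(uint64_t vaddr, uint64_t base, uint64_t size) {
    return vaddr >= base && vaddr - base < size;
}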

Variable page size is not that well established as a technique: large pages don't work that well with copy-on-write techniques, they don't deal well with fragmentation of memory, and larger pages tend to be split up by operations like mmap / munmap / mprotect... Generally, memory contiguity is a limited resource, with contention not only at program scale but also at global OS scale, and the general response seems to have been to keep using 4k pages and increase the size of TLB caches.

When you're adding multiple page sizes, you're introducing some hard NP-complete problems to the OS's page allocator, since it can't just have a list of available free pages that it distributes to programs willy-nilly, and you have to have all sorts of page-contiguity preserving algorithms. And I'm pretty sure that it's often not practical to allocate memory by enlarging already allocated pages because this only works if the contiguous memory is unallocated, which becomes not only a program-scale problem but an OS-scale problem.

Moving data in memory is NOT a good idea IMHO, no matter how fast you can copy. Especially for applications like games, which can use hundreds of MB of RAM and CANNOT have 5ms or 10ms stalls.


Author: Kurt Baumgardner Date: 2016-09-13 00:12
I don't think it's possible to both be thrifty with memory, and prevent memory fragmentation, from an application programmer's point of view. You can just allocate a huge heap at the start of your program, and do all your work in that heap, thereby preventing fragmentation. Or you can just use what's currently needed, releasing memory back to the OS, but creating the potential for fragmentation. As a programmer, personally I don't want to have to think about how the OS gives me my memory, and I want the OS to make use of memory I've released. And, in today's OSes, the method for providing memory to programs is a magic black box - you ask, and the OS gives. You really don't have a lot of control of how that occurs.

It would be nice to be able to provide hints: "I need a temporary chunk of memory that I'll give back soon", or, "I'll be needing a contiguous chunk of X size for a while."

I assume we're talking about a replacement for X86/X64, for servers and desktops. I can sympathize with the goal of optimizing, and wanting to keep everything simple, and that's a noble cause. But, before making it optimal, it has to work first (and by that, I mean it has to be practical, and be able to handle today's apps).

Consider a web browser. Navigate to a text page (or a forum), and that browser may need a couple of Mb of memory. But, then navigate to an image-laden page, and BAM! You now need dozens of Mb to render all those images into. Some browsers delay the rendering of JPGs, for example, until you actually scroll down to where the JPG is visible. And if you keep scrolling, they toss that rendering from memory.

My point is that, some programs have very complicated memory allocation needs, and that does not suggest that they are written poorly. Those web browser designers are also optimizing, but they are concerned with memory usage, load time, render speed, and much more. Forcing them to allocate memory a different way limits their abilities. And, without paging, they are forced to be thrifty with memory. Which causes fragmentation. Full circle.

If this chip is to sit in people's homes across the world, you've got to expect that many thousands of different programs will be running on them. And each program will be built by people with varying levels of skill. People who will not respect the beauty behind the memory organization scheme. And, you have to cater to them, because if their program brings the system to its knees, your chip and the OS will be blamed.

In other words, some crappy app should be able to disrespect the entire memory space...and your chip and OS should handle it with some grace, I believe. And, even trying to be respectful of the machine's resources seems difficult, as I am torn between releasing no-longer-needed memory (which is supposed to be the right thing to do), and trying to prevent fragmentation (which is, essentially, the opposite, unless I've missed something).

Back to the subject of dreaming about my ideal CPU: These are not well thought out, or even practical, most likely. But they'd be nice. Here goes:

1. 16Gb memory on the same chip as the CPU. I don't know what the limitations are, but, if all your PC's memory could be super-fast, on-chip, wow! Memory wait states slow down the CPU a lot, and, if it was all on the same chip, you could eliminate all the complex nightmare caching hardware.

2. If not #1, then some really good cache hint/directive instructions.

3. Instead of relying on branch prediction, why not take both branches, and provide the ability to swap pipelines to use the confirmed branch? This dual pipeline could be used for extra execution when branching was not occurring.

4. A hardware block move/fill/swap, page-based. Runs like a background job. An instruction could be used to test for completion by comparing a given address against each pending job's address range.

5. Instead of saving registers on task switch, use an array of registers, indexed by taskid.

I'm sure I could think up some more. As I stated before, I realize that most of these are far-fetched, and do not really fit into Mr. Fog's design. But, maybe one has merit, so I present them for thought. I find the whole project fascinating, and I hope one day that you can get the chip to the fabrication stage! Best of luck!


Author: Agner Date: 2016-09-13 00:39
Kurt Baumgardner wrote:
And each program will be built by people with varying levels of skill. People who will not respect the beauty behind the memory organization scheme. And, you have to cater to them, because if their program brings the system to its knees, your chip and the OS will be blamed.

In other words, some crappy app should be able to disrespect the entire memory space...and your chip and OS should handle it with some grace, I believe.
Some performance critical programs are written by highly skilled programmers who want to tweak as much performance out of the system as possible. Depending on the application, they may be able to predict how much memory they will need and allocate it all at once. Such applications would only need a handful of memory blocks which could easily be handled by an on-chip memory map. Other programs, probably less critical, are made with point-and-click tools and abstract frameworks by people who have no idea what is going on behind the scenes. They may cause heavy memory fragmentation without even knowing it. Maybe we can have a dual system with an on-chip memory map for well-behaved applications and a partially software-based paging system which will be activated only if the memory becomes too fragmented. Programmers who make well-behaved applications will be rewarded with superior performance.
Back to the subject of dreaming about my ideal CPU: These are not well thought out, or even practical, most likely. But they'd be nice Here goes:

1. 16Gb memory on the same chip as the CPU. I don't know what the limitations are, but, if all your PC's memory could be super-fast, on-chip, wow! Memory wait states slow down the CPU a lot, and, ,if it was all on the same chip, you could eliminate all the complex nightmare caching hardware.

2. If not #1, then some really good cache hint/directive instructions.

3. Instead of relying on branch prediction, why not take both branches, and provide the ability to swap pipelines to use the confirmed branch? This dual pipeline could be used for extra execution when branching was not occurring.

4. A hardware block move/fill/swap, page-based. Runs like a background job. An instruction could be used to test for completion by comparing a given address against each pending job's address range.

5. Instead of saving registers on task switch, use an array of registers, indexed by taskid.

I'm sure I could think up some more. As I stated before, I realize that most of these are far-fetched, and do not really fit into Mr. Fog's design.
These ideas are not farfetched, and some have already been implemented in various systems. Swapping register banks was even supported in some processors back in the 1980s. Putting RAM on the chip is an obvious thing to do for the less memory-hungry applications. The more RAM you put on the chip the slower it will be, so you need one or more levels of cache in between anyway.


Author: Kurt Baumgardner Date: 2016-09-13 19:08
Agner wrote:
Kurt Baumgardner wrote:
And each program will be built by people with varying levels of skill. People who will not respect the beauty behind the memory organization scheme. And, you have to cater to them, because if their program brings the system to its knees, your chip and the OS will be blamed.

In other words, some crappy app should be able to disrespect the entire memory space...and your chip and OS should handle it with some grace, I believe.
Some performance critical programs are written by highly skilled programmers who want to tweak as much performance out of the system as possible. Depending on the application, they may be able to predict how much memory they will need and allocate it all at once. Such applications would only need a handful of memory blocks which could easily be handled by an on-chip memory map. Other programs, probably less critical, are made with point-and-click tools and abstract frameworks by people who have no idea what is going on behind the scenes. They may cause heavy memory fragmentation without even knowing it. Maybe we can have a dual system with an on-chip memory map for well-behaved applications and a partially software-based paging system which will be activated only if the memory becomes too fragmented. Programmers who make well-behaved applications will be awarded with superior performance.
Believe me, I like the idea of the super-fast memory map, and the dual system idea is interesting. I appreciate a system that remembers to give the programmer the keys to the kingdom, so to speak. Because most systems provide a dumbed-down, common-denominator approach. But a dual system? That makes the system that much more complicated, doesn't it? Maybe OS drivers and OS-protected low-level stuff could use the on-chip stuff exclusively. Otherwise, I'll be pushing to have my Super Zombie Crusher 3 pushing OS stuff out to let my 100-fps renderer live on-chip, right?!! What programmer would choose the slow model? I don't know - the more I think about it, the more I lean towards the traditional model. Yes, it's slow and complex. But, that's because it's necessary, to get the power you want and need. I think Intel got it right, somewhat...it's hard to deny. Simplifying it upfront leads to more complex schemes down the road. To me it seems inevitable. Now, please reconsider on-chip mass memory, which could alleviate this issue and others - see below.
Backto the subject of dreaming about my ideal CPU: These are not well thought out, or even practical, most likely. But they'd be nice Here goes:

1. 16Gb memory on the same chip as the CPU. I don't know what the limitations are, but, if all your PC's memory could be super-fast, on-chip, wow! Memory wait states slow down the CPU a lot, and, ,if it was all on the same chip, you could eliminate all the complex nightmare caching hardware.

2. If not #1, then some really good cache hint/directive instructions.

3. Instead of relying on branch prediction, why not take both branches, and provide the ability to swap pipelines to use the confirmed branch? This dual pipeline could be used for extra execution when branching was not occurring.

4. A hardware block move/fill/swap, page-based. Runs like a background job. An instruction could be used to test for completion by comparing a given address against each pending job's address range.

5. Instead of saving registers on task switch, use an array of registers, indexed by taskid.

I'm sure I could think up some more. As I stated before, I realize that most of these are far-fetched, and do not really fit into Mr. Fog's design.
These ideas are not farfetched, and some have already been implemented in various systems. Swapping register banks was even supported in some processors back in the 1980s. Putting RAM on the chip is an obvious thing to do for the less memory-hungry applications. The more RAM you put on the chip the slower it will be, so you need one or more levels of cache in between anyway.
On the subject of massive on-chip memory: It's slow. Ok. But, how slow? I really want to see the numbers on this one. Maybe it requires a new technology. But, let's assume it's possible. Think about it. Low fixed wait states. 0 cache misses. No complicated cache hardware - this is a big one. Or, the pipeline burst cache hardware could instead force-feed the execution pipeline with instructions. Cache misses happen a lot. And, surely it's going to be faster than external memory. Is it expensive, money-wise? Is that why we don't see it being attempted? Maybe you could spread the memory across multiple cores.

Some algorithms simply cannot avoid reading memory "vertically", such as 90-degree block rotation algorithms. These functions can suffer a cache miss on each read, unless proper cache hints are provided...yielding a needlessly slow read/write.

To me, cache misses and branch mis-prediction are the 2 things that prevent us from being able to optimize our code accurately, because we cannot determine how much time each instruction will take, and therefore we cannot pair instructions perfectly. Knowing exact memory timing allows the pipeline to be finely tuned, I'd imagine. Removing the cache hardware provides fast chip real estate to put RAM in, and simplifies the design enough to justify research into making it possible.

And, your memory map could sit in this same main memory too.


Author: Hubert Lamontagne Date: 2016-09-13 12:12
Kurt Baumgardner wrote:
Back to the subject of dreaming about my ideal CPU: These are not well thought out, or even practical, most likely. But they'd be nice Here goes:

1. 16Gb memory on the same chip as the CPU. I don't know what the limitations are, but, if all your PC's memory could be super-fast, on-chip, wow! Memory wait states slow down the CPU a lot, and, ,if it was all on the same chip, you could eliminate all the complex nightmare caching hardware.
http://www.zdnet.com/article/why-doesnt ... n-the-cpu/
http://electronics.stackexchange.com/qu ... e-cpu-chip
2. If not #1, then some really good cache hint/directive instructions.
Generally, this is done with instructions like PREFETCH on x86.
3. Instead of relying on branch prediction, why not take both branches, and provide the ability to swap pipelines to use the confirmed branch? This dual pipeline could be used for extra execution when branching was not occurring.
A lot of branch instructions that your CPU pipeline will see are in loops and generally branch the same way a lot of times in a row, and would actually get slower if you ran both sides, because rarely executed paths would be competing for resources. Here's a random C++ example:

Code: Select all

for(int i=0; i<200; i++) {
    float sample = buffer[i];
    mFilterMem += (sample - mFilterMem) * mFilterParam;
    sample = mFilterMem;
    if(mVolumeFade) {
        sample *= mVolume;
        mVolumeFade += mFadeRate;
    }
    buffer[i] = sample;
}
The branch at the end of the loop (i<200) will be taken 199 times in a row, and only skipped once. Speculatively executing the code after the loop makes no sense in this case. In addition to this, the branch in the middle of the loop (if(mVolumeFade)) will either always be taken or never be taken in the 200 iterations, so in that case speculatively executing both sides of the loop doesn't make sense either, and it's better to trust your branch predictor.

Speculatively executing both sides is generally something you'll only see in very large and complex out-of-order CPU cores, and I think it probably involves something like a 3 or 4 way branch predictor that predicts NO-JUMP/MAYBE-JUMP/YES-JUMP or NO-JUMP/MAYBE-NO-JUMP/MAYBE-YES-JUMP/YES-JUMP. I don't think it handles multi-way branch prediction too well either (ie jump to function pointer... some modern cores can handle multiple jump targets like this).
4. A hardware block move/fill/swap, page-based. Runs like a background job. An instruction could be used to test for completion by comparing a given address against each pending job's address range.
Sounds a lot like a DMA controller - and if I'm not mistaken, the x86 DMA controller can be used for that, but I don't think it runs fast enough to get a speed gain. And it kinda competes with the CPU for memory access cycles, which isn't too good, although I guess you could make it operate in the unused cycles.
5. Instead of saving registers on task switch, use an array of registers, indexed by taskid.
I'm sure I could think up some more. As I stated before, I realize that most of these are far-fetched, and do not really fit into Mr. Fog's design. But, maybe one has merit, so I present them for thought. I find the whole project fascinating, and I hope one day that you can get chip to the fabrication stage! Best of luck!

One that's worth it and that is used on ARM is to have 2 stack pointer registers, one for USER mode and one for OS mode, and switch when interrupts happen, because this makes interrupt handlers faster. Fast interrupt handlers are generally nice to have because then it makes stuff like software TLBs realistic, and it speeds up OS calls. Fast task switch between user programs is a bit less important because you're going to have tons of cache misses anyways, so not having to save/restore registers isn't as important (and a lot of programs can run at the same time, which makes it unlikely that you can hold all register sets at the same time).


Author: Kurt Baumgardner Date: 2016-09-14 16:56
Thanks, guys - great, informative, interesting answers. I wish I knew more about the architecture. I would like to come up with a way to mitigate the costs of a pipeline flush. It seems like such a waste when it happens. As good as modern branch prediction is, a lot of that benefit of smart prediction is thrown away. Maybe, while taking both branches, another predictor would realize that this wrong branch was being repeatedly taken, and would instead store an image of the pipeline that could be switched to when the misbranch was actually taken, and the other context could be flushed in the background.

I think I would pull my hair out trying to design such a thing. @Agner - I think you're on the right track avoiding microcode, keeping the design simple, and seeing just how well it can be made to perform. Best of luck.


Author: Hubert Lamontagne Date: 2016-09-11 17:54
Kurt Baumgardner wrote:
By the way, I *really* like the idea of programmer-built extensions to the instruction set. I would love to have the following capabilities:

1. Software gate connection (direct gate array access).
That sounds a lot like a CPU+FPGA combo... That would be a nice thing to have, I'd say the challenge would be that it's hard to share an FPGA between multiple programs. The usual strategy is to save/restore the state on task switch, but that's hard with a FPGA, since there's a lot of setup data, and it's hard to save/restore all the registers and block RAMs. You could try to share parts of the FPGA but that sounds very hard to do... though I guess you could split it in N units with a bunch of gates each, and each application would take up some number of units. Another model would be how video cards are shared.
2a. Microcode programming (if so equipped), or
That's a bit hard if most of the operations on your CPU are already single-microcode, like on a RISC. For a machine with combined memory+arithmetic instructions like x86 or ForwardCom, I guess you could make the combinations of memory-op + arithmetic-op user-configurable, or offer configurable multi-step ops like multiply-accumulate. You could also separate the address calculation and the load/store operation, as long as you don't add new register file writes/reads on the intermediary steps - you could have hidden extra registers that are only visible by sub-operations of your instruction, which wouldn't have to be renamed/loaded/stored to the main register file.
2b. the ability to control load/store, ALU, bit shift, cache control, vectoring, etc hardware.
VLIW cpus do something a lot like that. You basically specify which operation each part of the CPU is doing. They haven't really caught on yet because they have trouble dealing with things like memory store instructions needing to have the target address known early (so that program order in loads/stores can be preserved without requiring the compiler to do ultra-aggressive alias analysis) but the data to be written has to be known late (so that the CPU has time to calculate it).
3. An on-chip cache dedicated to user instructions. Could be used for extremely fast lookups/translations, pre-calculated math, etc.

(one can dream

Kurt
Hmmm... that might not be a bad idea, though it would make the instruction issue hardware of the CPU appreciably more complex (so it would make branch mispredictions more costly). It would be useful to implement CISC instructions on a RISC base unit for instance... basically it would make a good cache for implementing the 2a proposition.


Author: Kurt Baumgardner Date: 2016-09-13 15:51
Hubert Lamontagne wrote:
Kurt Baumgardner wrote:
By the way, I *really* like the idea of programmer-built extensions to the instruction set. I would love to have the following capabilities:

1. Software gate connection (direct gate array access).
That sounds a lot like a CPU+FPGA combo... That would be a nice thing to have, I'd say the challenge would be that it's hard to share an FPGA between multiple programs. The usual strategy is to save/restore the state on task switch, but that's hard with a FPGA, since there's a lot of setup data, and it's hard to save/restore all the registers and block RAMs. You could try to share parts of the FPGA but that sounds very hard to do... though I guess you could split it in N units with a bunch of gates each, and each application would take up some number of units. Another model would be how video cards are shared.
I did not consider multiple programs wanting to use the FPGA - that *is* an ugly issue. Maybe it could be treated like a device, and programs would have to make a reservation, and obtain an exclusive lock for, at least, a portion of the array. I'm not sure there's a good answer there. But, you're onto something: It could be similar to the extra processing offered by GPUs. The reason it works as well as it does for GPUs is that, typically, only one program is using the GPU at any given time (when rendering games, which is, of course, the most visible use).


Author: Agner Date: 2016-09-14 00:45
Kurt Baumgardner wrote:
Believe me, I like the idea of the super-fast memory map, and the dual system idea is interesting.[...] What programmer would choose the slow model?
The OS will choose for you. If your application is well behaved and avoids memory fragmentation then you will use the on-chip memory map. If you allocate and deallocate huge amounts of memory randomly, so that the memory becomes fragmented, then the software based memory paging will be activated when the on-chip memory map is filled up. If the user opens and closes a lot of memory hungry applications so that there is no contiguous space to open the next program in, then the OS may issue a warning and give the user the option to close some of the applications or run a garbage collector that moves some memory blocks around. The programmers will learn that if they want top performance they should exercise some discipline in memory allocation. I believe that most applications will be able to run without memory fragmentation.


Hubert Lamontagne wrote:
A lot of branch instructions that your CPU pipeline will see are in loops and generally branch the same way a lot of times in a row, and would actually get slower if you ran both sides, because rarely executed paths would be competing for resources. Here's a random C++ example
The branch predictor will know if a branch is easily predictable. It could select the branches with the worst prediction rate for executing both sides speculatively. In your example, an optimizing compiler may recognize that the inner branch is loop invariant and move it outside the loop, giving you one loop in each branch. The branch predictor should be able to get perfect prediction for a loop with constant count.
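For illustration, here is Hubert's loop after that loop-invariant transformation (classic loop unswitching), sketched by hand rather than taken from any compiler's output. It assumes the fade value does not cross zero within the 200-sample block, which is what makes the test effectively invariant:

Code: Select all

void process(float *buffer, float &mFilterMem, float mFilterParam,
             float &mVolumeFade, float mFadeRate, float mVolume) {
    if (mVolumeFade) {
        // Fading: the i<200 branch is the only branch left, and it predicts
        // almost perfectly (taken 199 times, not taken once).
        for (int i = 0; i < 200; i++) {
            float sample = buffer[i];
            mFilterMem += (sample - mFilterMem) * mFilterParam;
            sample = mFilterMem * mVolume;
            mVolumeFade += mFadeRate;
            buffer[i] = sample;
        }
    } else {
        // Not fading: no volume scaling at all in this copy of the loop.
        for (int i = 0; i < 200; i++) {
            float sample = buffer[i];
            mFilterMem += (sample - mFilterMem) * mFilterParam;
            buffer[i] = mFilterMem;
        }
    }
}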


Kurt Baumgardner wrote:
16Gb memory on the same chip as the CPU
Hubert Lamontagne wrote:
This is an obvious thing to do. I believe that we will see a lot of on-chip RAM some day when technology allows it. It may be SRAM. Small system-on-a-chip microcontrollers already have both SRAM and Flash on the chip. ForwardCom may be implemented in this fashion for some dedicated applications.


Kurt Baumgardner wrote:
I *really* like the idea of programmer-built extensions to the instruction set
You may have an FPGA on the chip. An application should be able to monopolize this FPGA, or at least a part of it. I am trying to avoid microcode completely in ForwardCom.


Author: Jorcy Neto Date: 2016-09-18 13:55
Agner wrote:
Kurt Baumgardner wrote:
16Gb memory on the same chip as the CPU
Hubert Lamontagne wrote:
This is an obvious thing to do. I believe that we will see a lot of on-chip RAM some day when technology allows it. It may be SRAM. Small system-on-a-chip microcontrollers already have both SRAM and Flash on the chip. ForwardCom may be implemented in this fashion for some dedicated applications.
Actually, Intel has already been studying the performance gains and power issues of logic+DRAM stacking since the mid-2000s (see: "Die Stacking (3D) Microarchitecture" here http://www.cs.cmu.edu/afs/cs/Web/People ... lack06.pdf )
agner
Site Admin
Posts: 177
Joined: 2017-10-15, 8:07:27
Contact:

A null register?

Post by agner »

Author: csdt Date: 2016-09-23 12:55
Very interesting project.

What if there were a special named register (eg: v0) with a special meaning:
- v0 always contains zeroed bits
- using v0 as input or output doesn't create any dependency.
We could have the same for r0.

This would be used to quickly get a zero value without creating any false dependency. I really don't like the x86 xor hack.
It could be used with any instruction, and can provide an easy way to compute some stuff in one single instruction without having a new instruction for it:
- negation: sub.f v1, v0, v1
- bitwise not: and_not v1, v0, v1
- getting -1 (int): and_not v1, v0, v0
- getting -0.0: set_bit v1, v0, $31
- compare to zero: compare v1, v1, v0, $whatever_the_comparison_is

It may also be used to discard a result:
- prefetching: move v0, [ r1, length=16]

The main drawback I can see is a register name wasted. It would also make the register renaming a bit more complicated (I think), but not so much afaiu.

I would be pleased to know what you think of this.


Author: Agner Date: 2016-09-24 00:25
Some instruction sets have a null register. But, as you say, it is a register name wasted.

ForwardCom uses the principle of a null register in three cases:
  • Specifying r0 or v0 as a mask means no mask and unconditional execution.
  • Specifying the stack pointer (r31) as an array index means no index.
  • Specifying the stack pointer (r31) for vector length means a scalar.
There is no need for a register with the fixed value zero because a constant can be embedded in the instruction, and this constant can be zero or any other value.

Tiny instructions (half size) can have a 4-bit signed constant. Single size instructions (32 bits) can have a constant of 8 or 16 bits. Larger instructions can have constants of 32 bits (support for 64 bits constants is optional). This makes it possible to set a register to zero or any other simple constant without increasing the size of the instruction. There are various methods for compressing larger constants, and it also works for floating point values.


Author: Hubert Lamontagne Date: 2016-09-24 09:39
The nice thing about a null register is that it lets you remove some opcodes. MIPS does this:

- There is no actual MOV instruction. It's replaced by "OR $rd, $zero, $rx", and immediate loads become "ADDIU $rd, $zero, imm" or "LUI $rd, imm".
- This lets you do absolute addressing: "LW $rd, offset($zero)".
- NEG is of course "SUB $rd, $zero, $rx".
- NOT is "NOR $rd, $zero, $rx".
- You can store 0 to memory directly: "SW $zero, offset($ra)".
- Comparison instructions like BEQ, BEQL, BNE, BNEL don't have an immediate field (since it's used by the branch offset) but can still compare to 0, which is a very common value to compare with.
- As previously mentioned, values can be discarded by writing them to $zero (I'm not sure what the point is on MIPS though, since it has an actual PREF instruction).

Generally, whether or not this is a gain tends to depend on register file size:
- SuperH and x86 have 8 registers so the 8th register is way more important than a hardwired zero.
- ARM has 16 registers and has opted not to have a zero register either (though it does have the PC mapped to a register for some reason).
- MIPS and clones such as RISC-V, Alpha, ARM64 and PA-RISC have 32 registers, and C++ compiler register allocators rarely fill all 32, so the very slightly reduced number of opcodes is a small but useful gain. Note that ARM64 uses the same register name for the stack pointer and the zero register (which one is used depends on the opcode).

One of the downsides of a zero register is that it does make software emulation more complex (since you have to check that operation destinations aren't the zero register). I also wonder if this has made out-of-order register renamers more complex or not. And considering that ForwardCom has a ton of opcodes already, I don't think removing a few ones with a zero register has that much impact anyways.


Author: csdt Date: 2016-09-26 03:43
Thank you for your insight.

Is it possible to use any vector instruction with an immediate scalar broadcasted?
In my mind, that was a use case for the null register.

Discarding the result is not so interesting with ForwardCom as there is no instruction with 2 outputs.
Otherwise, it could have been interesting for division (discarding the remainder).


Author: Agner Date: 2016-09-27 01:21
csdt wrote:
Is it possible to use any vector instruction with an immediate scalar broadcasted?
Yes. The immediate value can have different sizes, integer or float or a small signed integer converted to float.
agner
Site Admin
Posts: 177
Joined: 2017-10-15, 8:07:27
Contact:

Indexed registers

Post by agner »

Author: Kurt Baumgardner Date: 2016-09-26 22:00
Ok, here's an idea I haven't heard of: Set up some registers: ProgramID, ThreadID, and ContextID. These are controlled by the CPU and set up by the OS when spawning a program or thread. The app programmer only has access to ContextID. The bits of each of these combine to address a memory buffer for the CPU registers. A thread or program task switch simply indexes a different set of registers - no need to preserve registers between task switches. You could even include some instructions for register exchange (EXX) that simply change ContextID. Maybe this mechanism could also supply register renaming functionality...?

The actual size of the registers is implementation specific; it just depends on how the address bits are divided up. This supports the variable vector registers. Of course this would have to be a very fast memory. The idea came from the Page 0 stuff you could do on the good ol' 6502, where the first page of memory could be addressed with 1 byte.

If the actual registers were separate, and the register page could be copied from/to the real registers, you could use ContextID to fill all registers at once, for very fast function initialization. The compiler could place instructions at program start that would fill up register contexts for critical functions, and call up that register set when entering each function. There's a lot of possibilities. It all depends on the ability to read from/write to this register page very quickly, and the ability to put enough of this memory on the chip. The amount of memory would limit the number of concurrent programs, threads and contexts, as well as the number and size of registers. Maybe you'd have to consider a program and a thread the same thing, which could reduce the size of this memory somewhat.

If done right, I think this could really speed up task switches and function initialization and save stack space. Another idea could be a partial register exchange. ProgramID and ThreadID (or just ThreadID) point you to the proper register bank, but a special command could cause, say, R0 thru R15 to always read from ContextID 0, while R16 thru R31 would read from the context set by this special bank switch command. For a 2-bit ContextID, that provides you another 64 registers to use, 1 bank of 16 at a time. So R0 thru R15 stay the same, but R16 thru R31 are pulled from 1 of 4 banks. Or you do the full context switch, for 32 out of 128.

I'm not sure how this would affect other processes, but on the surface, it sounds powerful.

Please note that I am describing 2 opposing possible setups:
Setup #1: The registers live in this memory block, and are one and the same.
Setup #2: The registers are read from this memory block with a GetContext command, and written to the memory block with a PutContext command.

I see benefit in both setups. Unfortunately, I think either setup might have performance problems, unless some very specialized burst read/write hardware is built. But it could be a joy to program. Imagine placing all your immediates and constants into a register context once at the start of your program. Then, when you call your function, a single command sets up all your constants, your loop counts, and your buffer pointers, without using the stack, or even setting registers. Compilers would have to be a bit different, but we're trying for a better design, right?
Could be interesting. You might even be able to justify reducing your number of registers, and depend on bank swapping, which would reduce your instruction set considerably.

Maybe a routine could "push its context" to a function to achieve by-reference registers. Or copy its context into another for by-value, without using the stack, or explicitly setting registers.

Is it a crazy idea? It is complex, but could be very powerful. I wonder if it is practical.

Ok, here's my last idea: If the above cannot be done, how about an instruction that pushes registers onto the stack, based on a mask. The push-by-mask opcode is followed by an immediate 32-bit value, with 1 bit dedicated to each register. Registers are always pushed in the same order. To push r0 thru r7, you'd use an immediate value of 11111111b. Or, to push r2, you'd use 0100b. I guess this would be slow, but it would be compact, and useful. Pop would, of course, reverse the order, so the same mask could be used.
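To pin down the intended semantics, here is a small software model of that push-by-mask / pop-by-mask pair (the register file and stack are plain arrays here; everything about it is hypothetical):

Code: Select all

#include <cstdint>

constexpr int kNumRegs = 32;

// Toy machine state: 32 integer registers and a downward-growing stack.
struct Machine {
    uint64_t r[kNumRegs];
    uint64_t *sp;
};

// Bit i of the mask selects register r[i]. Registers are always pushed in
// ascending index order, so the matching pop can reuse the same mask.
void push_by_mask(Machine &m, uint32_t mask) {
    for (int i = 0; i < kNumRegs; i++)
        if (mask & (1u << i))
            *--m.sp = m.r[i];
}

void pop_by_mask(Machine &m, uint32_t mask) {
    for (int i = kNumRegs - 1; i >= 0; i--)   // reverse order undoes the pushes
        if (mask & (1u << i))
            m.r[i] = *m.sp++;
}

For example, push_by_mask(m, 0xFF) saves r0 thru r7 with a single 32-bit mask.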


Author: Agner Date: 2016-09-27 01:46
My proposal has some features for speeding up context switches, but not quite as radical as you are proposing.

The memory map is so small that it can be contained on the chip. There are separate memory maps for system code. In benign cases where a process uses only a small part of the total memory map area on the chip, you may swap memory maps when switching processes.

All code is addressed relative to the instruction pointer and all read/write memory is addressed relative to the data pointer and the stack pointer. This means that everything is addressed relative to just three registers which can quickly be changed. There are separate registers that are only accessible to system code. These can be useful in a device driver. The device driver needs access to the registers of the application program because these registers (actually, only half of them) can be used for parameter transfer to the device driver. The extra registers can be used by the device driver for temporary storage of the application's registers.

I interpret your Setup # 1 as something like register banks that can be swapped. This may be added as an optional feature on ForwardCom, but I don't know if the costs in terms of central silicon space can justify it.

Your setup # 2 is reading and writing all registers to some memory area or cache. I have avoided an instruction for "read all registers" or "write all registers" because I want to avoid complex micro-coded instructions and because these instructions are very slow on x86 (I don't know about other architectures). Maybe a special on-chip cache for the sole purpose of saving registers on context switch would be useful.


Author: Kurt Baumgardner Date: 2016-09-27 21:49
Makes sense. Yeah, the only way setup #2 would be worth it is if the registers could all be stored in a single cycle, like a latch, which would require an unconventional memory with a very wide data bus. Probably not practical. My goal has been to present new ideas, and let you smarter guys consider the possibilities and practicality. If this ability could be built, it would radically change the way functions are built, and executed, and it has support for your variable vector widths, which made it seem like a nice fit. I just imagined the possibility of having nearly zero cost functions, with all constants and immediate values already set and ready to go, without any stack usage, or register setup. It would be something like inlining your whole program. Oh well :) Your current setup seems like a nice compromise.


Author: Agner Date: 2016-09-28 00:53
Kurt Baumgardner wrote:
I just imagined the possibility of having nearly zero cost functions, with all constants and immediate values already set and ready to go, without any stack usage, or register setup.
If you talk about function calls, and not context switches, then I can almost fulfill your wish. ForwardCom supports a register allocation mechanism where the compiler can see which registers are modified by e.g. a library function. It may use the same registers for temporary variables that do not have to be saved across the function call, and use different registers for variables that need to be saved across the function call. With 32 registers of each type, we will have no register spilling in the critical inner nesting levels. If the program has deeply nested function calls then the critical innermost loops are likely to be in the deeper functions while the outermost nesting levels are unlikely to matter in terms of performance. Return addresses can have their own on-chip stack separate from the data stack so that you have near zero cost function calls.


Author: Hubert Lamontagne Date: 2016-09-28 02:34
* For putting all registers in a cache-like memory block:
This would require a very large register file. For instance, the integer register file on some recent Intel cores has 168 registers. For ForwardCom's 32 registers, you can fit 5 whole contexts in there. But you have to reserve some registers for renamed register values that haven't been retired yet for out-of-order execution to operate on, so you need more than that. For running a modern OS, you'd probably need dozens and dozens of contexts, which would require something as large as a data cache. Ex: 8 bytes per register * 32 registers * 64 contexts = 16 KB - that's the size of the whole L1 data cache on some modern CPUs!

More reasonable versions of this already exist though - I think x86 already does this for HyperThreading, with 2 different contexts at the same time. Another case where it's really useful to have just 2 bankswitched sets of registers is context switching between the user program and the OS - for handling page fault exceptions and memory allocations and software TLBs and software emulated instructions and so forth. Often, this involves only bankswitching only a few registers, such as user SP versus system SP (some ARM systems).

* For saving/restoring registers in a cache-like memory block with separate instructions:
It's going to be hard to save/restore all the active registers of a context at the same time because your physical registers will be assigned to an unpredictable subset of registers. Which means that you'd need to have 32 register read/write ports to read/write all your registers the same time. Afaik, register files with that many ports are large and power hungry. And presumably you'd need a special multiplexing mechanism to share those ports with the read/write accesses of normal operation.

You also still have the problem that your cache-like memory block has a finite size, which means that sooner or later the OS will run out and will start to read/write parts of it to main RAM anyway - which is what this mechanism was trying to prevent in the first place!

In theory you could map this special memory so that it is backed by RAM, which solves the size limit, but then all operations that read or write to your special memory become potential RAM operations: they have to be issued in order with other memory operations, they compete for data cache access ports, and they can potentially trigger interrupts - which means that either the CPU has to run those operations in order (with stalls when waiting), or all those operations have to run speculatively and be undoable (which means that the previous state has to be preserved until the operation can be confirmed as having run for real). Even without RAM mapping, you'd probably still need a write buffer.

* For having load multiple/store multiple instructions
Instructions that push/pull a whole bunch of registers on stack at the same time do exist. ARM has one. 68000 has one. It's nice for improving code density. The problem is that it's still an instruction that generates essentially up to 32 memory writes/reads at the same time (plus a stack pointer update). Data caches can't really handle that - they can handle at best 2 reads + 1 write per cycle. The register file can't handle that either. I'm not sure it can run faster than just a simple normal series of read/write operations, plus it would probably have to be implemented as microcode.


Author: Kurt Baumgardner Date: 2016-10-03 21:34
Thank you for the detailed answers, Agner and Hubert.

Hubert Lamontagne wrote:
* For putting all registers in a cache-like memory block:
This would require a very large register file....

* For saving/restoring registers in a cache-like memory block with separate instructions:
...(more issues)...
Well, that's a shame. I've recently been looking at some 80x86 disassembly of a C .exe that contained a lot of function calls. Most of them do a bunch of PUSHes, read some stack vars, then perform the function's actual purpose, and end with the POPs and finally the return. And that's the pattern in the majority of function calls: register save, fill, use, reload, followed by return (another stack pop). It seems like there should be a better way. Maybe instead of a register set for every program/thread, you could have just enough space for a few levels of registers. This would correspond to the "call stack". Since most calls come with PUSHes, and most returns are preceded by POPs, it seems like a prime candidate for optimization, for both speed and code size.

Maybe that's still too complex or slow for hardware, though.

Agner wrote:
If you talk about function calls, and not context switches, then I can almost fulfill your wish. ForwardCom supports a register allocation mechanism where the compiler can see which registers are modified by e.g. a library function. It may use the same registers for temporary variables that do not have to be saved across the function call, and use different registers for variables that need to be saved across the function call. With 32 registers of each type, we will have no register spilling in the critical inner nesting levels. If the program has deeply nested function calls then the critical innermost loops are likely to be in the deeper functions while the outermost nesting levels are unlikely to matter in terms of performance. Return addresses can have their own on-chip stack separate from the data stack so that you have near zero cost function calls.
Yes, I can see how this *could* reduce most of that PUSH/POP stuff in function calls. But can the compiler be smart enough to follow the code flow through those nested calls? (I apologize in advance if I've missed something about ForwardCom that simplifies this task.) Most compilers I've worked with simply preserve a function's used registers inside the function itself - in every function - because the compiler does not anticipate the code flow across calls. And, of course, in many cases it simply cannot: take CALL r15, or even CALL (r15), for example.


Author: Agner Date: 2016-10-03 23:54
Kurt Baumgardner wrote:
Yes, I can see how this *could* reduce most of that PUSH/POP stuff in function calls. But can the compiler be smart enough to follow the code flow through those nested calls? (I apologize in advance if I've missed something about ForwardCom that simplifies this task.) Most compilers I've worked with simply preserve a function's used registers inside the function itself - in every function - because the compiler does not anticipate the code flow across calls. And, of course, in many cases it simply cannot: take CALL r15, or even CALL (r15), for example.
The compiler has to obey the standard calling convention if the function is publicly visible. The calling convention for 32-bit x86 generates a lot of push and pop instructions. The most efficient calling convention is the one used for 64-bit x86 on Linux. The GNU, Clang and Intel C++ compilers can be quite efficient when the relevant optimization options are turned on.
See my calling convention manual at www.agner.org/optimize/#manuals


Author: Hubert Lamontagne Date: 2016-10-04 10:57
Kurt Baumgardner wrote:
[...]
Well, that's a shame. I've recently been looking at some 80x86 disassembly of a C .exe that contained a lot of function calls. Most of them do a bunch of PUSHes, read some stack vars, then perform the function's actual purpose, and end with the POPs and finally the return. And that's the pattern in the majority of function calls: register save, fill, use, reload, followed by return (another stack pop). It seems like there should be a better way. Maybe instead of a register set for every program/thread, you could have just enough space for a few levels of registers. This would correspond to the "call stack". Since most calls come with PUSHes, and most returns are preceded by POPs, it seems like a prime candidate for optimization, for both speed and code size.

Maybe that's still too complex or slow for hardware, though.
Function calls and their associated push/pops are very common, but a large majority of them happen on cold paths:
- Anything that happens on a cold path rarely makes any noticeable speed difference... Even moderately warm paths tend to count a lot less than the hottest paths in overall speed.
- The hottest paths are often loops, and they tend either to have no function calls at all, or to have calls that are only taken occasionally (error conditions etc.), or to have calls that can be inlined - which means the compiler can automatically allocate registers and even reorder operations within the parent function, as long as the callee is small enough for this to be beneficial (see the short C sketch after this list).
- Hot loops that do have function calls are often very large and complex loops that are likely to stall for all sorts of other reasons - not fitting in the instruction cache, data cache misses, pointer-chasing chains that stall on data cache latency, hard-to-predict branches, loading/storing lots of values from objects, and so forth - so the call overhead may get lost in the noise, or, if you're lucky, it happens while the CPU is waiting on something else (which makes it essentially free).
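To make the inlining point above concrete, here is a minimal C sketch (names made up): once blend() is inlined into the loop, no call/return or push/pop remains on the hot path, and the compiler allocates registers and schedules operations across the combined body.

Code:

#include <stdint.h>
#include <stddef.h>

/* Small helper that a compiler will normally inline at -O2. */
static inline uint32_t blend(uint32_t a, uint32_t b)
{
    return (a >> 1) + (b >> 1);   /* cheap average, ignoring rounding */
}

/* Hot loop: after inlining there is no call overhead left here at all -
   the body of blend() is register-allocated together with the loop. */
void blend_row(uint32_t *dst, const uint32_t *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = blend(dst[i], src[i]);
}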

One option would be register windows like on SPARC, which act as a hardware-assisted stack. Here's an article about that:
http://ieng9.ucsd.edu/~cs30x/sparcstack.html

Another option would be load-dual and store-dual instructions, which let you combine two 64-bit register memory operations into a single 128-bit access (as long as your stack is 128-bit aligned). That is probably as fast as it gets for saving/restoring a bunch of registers at once (especially if you can dual-issue it to a 2-port data cache).
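As a rough C sketch of what a store-dual/load-dual pair would buy you (names made up; the remark about STP/LDP is an assumption about typical AArch64 compiler output, not something specific to ForwardCom):

Code:

#include <stdint.h>

/* Save two callee-saved 64-bit values with one 128-bit wide access.
   'slot' is assumed to be 16-byte aligned; a store-dual instruction
   (or an AArch64 STP, which compilers commonly emit for adjacent
   stores like these) turns the two writes into a single memory op. */
void save_pair(uint64_t *slot, uint64_t a, uint64_t b)
{
    slot[0] = a;
    slot[1] = b;
}

/* Matching restore: one 128-bit wide load (load-dual / LDP). */
void load_pair(const uint64_t *slot, uint64_t *a, uint64_t *b)
{
    *a = slot[0];
    *b = slot[1];
}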

Bilinear Interpolation

Post by agner »

Author: Hubert Lamontagne Date: 2016-10-28 17:38
Bonus question: How well does ForwardCom handle bilinear interpolation?

It's the one algorithm I can think of that just doesn't fit well on most CPUs: it wastes a bunch of cycles twiddling bits, generating memory addresses and interpolation factors, and applying them across the RGBA channels (multiplied by the number of texture layers used), which is why GPUs have such a big advantage at this one particular thing. It can also show up in other tasks, such as wavetable synthesis (interpolation on the time axis of the waveform + interpolation between waveforms). On most CPUs like x86 or ARM you can use vectors for the interpolation part, but you still have to do lots of scalar integer ops to generate the memory addresses. And if the U,V coordinates are generated per pixel and can't be linearly interpolated, then having to send the interpolation factors over to the vector unit and scramble them into place can easily wipe out the potential gains.
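For concreteness, a plain scalar C version of the per-pixel work I mean might look like the sketch below (my own illustration - it assumes a 256x256 RGBA8 texture with wrap-around addressing and u/v in 8.24 fixed point). The coordinate unpacking and address arithmetic at the top is the part that doesn't vectorize well:

Code:

#include <stdint.h>

/* One bilinear sample from a 256x256 RGBA8 texture (no swizzle).
   u and v are 8.24 fixed point; coordinates wrap at the texture edge. */
static uint32_t bilinear_sample(const uint32_t tex[256 * 256],
                                uint32_t u, uint32_t v)
{
    /* integer texel coordinates and their wrapped neighbours */
    uint32_t x0 = u >> 24, x1 = (x0 + 1) & 255;
    uint32_t y0 = v >> 24, y1 = (y0 + 1) & 255;

    /* 8-bit interpolation fractions */
    uint32_t fu = (u >> 16) & 255;
    uint32_t fv = (v >> 16) & 255;

    /* fetch the four texels */
    const uint8_t *t00 = (const uint8_t *)&tex[y0 * 256 + x0];
    const uint8_t *t10 = (const uint8_t *)&tex[y0 * 256 + x1];
    const uint8_t *t01 = (const uint8_t *)&tex[y1 * 256 + x0];
    const uint8_t *t11 = (const uint8_t *)&tex[y1 * 256 + x1];

    /* per-channel weighted blend, done in 16+ bit intermediates */
    uint32_t out = 0;
    for (int c = 0; c < 4; c++) {
        uint32_t top = (t00[c] * (256 - fu) + t10[c] * fu) >> 8;
        uint32_t bot = (t01[c] * (256 - fu) + t11[c] * fu) >> 8;
        uint32_t val = (top * (256 - fv) + bot * fv) >> 8;
        out |= (val & 255) << (8 * c);
    }
    return out;
}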

Is there a way to do a particularly fast version of this on ForwardCom?


Author: Agner Date: 2016-10-29 00:48
Hubert Lamontagne wrote:
How well does ForwardCom handle bilinear interpolation?
It depends on how your data are organized into vectors/arrays. It requires a good deal of permutation. ForwardCom has good permutation instructions, but so do other vector instruction sets.


Author: Hubert Lamontagne Date: 2016-10-29 15:31
Agner wrote:
Hubert Lamontagne wrote:
How well does ForwardCom handle bilinear interpolation?
It depends on how your data are organized into vectors/arrays. It requires a good deal of permutation. ForwardCom has good permutation instructions, but so do other vector instruction sets.
Ok, how about the most classic texture organization: single texture, 256x256, 32bpp RGBA (so that's 8 bits per component, can be either RGBA or ABGR order), no swizzle. Can that be bilinear-interpolated efficiently?


Author: Agner Date: 2016-10-30 00:42
I don't think I understand your problem. You have four RGBA points, each in its own vector register. Each of these should be multiplied by a weighting factor and then they should all be added together. It's just multiplication and addition of 8-bit integers. You may zero-extend everything to 16 bits to avoid loss of precision, then shift right and compress back to 8 bits.
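In scalar C terms, the arithmetic I mean is roughly the following (a sketch only, with the four texels and their weights as inputs; the weights sum to 256, and the vector version does the same thing for all channels at once):

Code:

#include <stdint.h>

/* Weighted sum of four RGBA8 texels, computed in 16+ bit intermediates.
   Requires w00 + w10 + w01 + w11 == 256. */
static void blend4(const uint8_t p00[4], const uint8_t p10[4],
                   const uint8_t p01[4], const uint8_t p11[4],
                   unsigned w00, unsigned w10, unsigned w01, unsigned w11,
                   uint8_t out[4])
{
    for (int c = 0; c < 4; c++) {
        unsigned sum = p00[c] * w00 + p10[c] * w10     /* zero-extend and multiply */
                     + p01[c] * w01 + p11[c] * w11;
        out[c] = (uint8_t)(sum >> 8);                  /* shift right, narrow to 8 bits */
    }
}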


Author: Hubert Lamontagne Date: 2016-10-30 21:47
Agner wrote:
I don't think I understand your problem. You have four RGBA points, each in its own vector register. Each of these should be multiplied by a weighting factor and then they should all be added together. It's just multiplication and addition of 8-bit integers. You may zero-extend everything to 16 bits to avoid loss of precision, then shift right and compress back to 8 bits.
In pseudo-ASM it looks somewhat like this:

Code:

loop:

; generate memory addresses
shr r0, u_coord, 24
shr r1, v_coord, 24
shl r1, 8
add r4, r0, r1
add r2, r0, 1
and r2, 255
add r5, r2, r1
add r3, r1, 256
and r3, 65535
add r6, r0, r3
add r7, r2, r3

; load the 4 texels
mov v0, int32 [texture_adr + r4*4]
mov v1, int32 [texture_adr + r5*4]
mov v2, int32 [texture_adr + r6*4]
mov v3, int32 [texture_adr + r7*4]

; expand components from 8bits to 16bits to avoid wrap-around problems
expand.u8.u16 v0
expand.u8.u16 v1
expand.u8.u16 v2
expand.u8.u16 v3

; generate interpolation factors
shr r8, u_coord, 16
and r8, 255
mov broadcast.u16x4 v4, r8
shr r9, v_coord, 16
and r9, 255
mov broadcast.u16x4 v5, r9

; linear interpolate
sub.s16 v1, v0
sub.s16 v3, v2
mul.s16 v1, v4
mul.s16 v3, v4
shr.s16 v1, 8
shr.s16 v3, 8
add.s16 v1, v0
add.s16 v3, v2
sub.s16 v3, v1
mul.s16 v3, v5
shr.s16 v3, 8
add.s16 v3, v1

; alpha blend
mov v6, u32 [screen_ptr]
expand.u8.u16 v6
mov broadcast.s16x4 v7, v3[3]
sub.s16 v3, v6
mul.s16 v3, v7
shr.s16 v3, 8
add.s16 v3, v6
shrink.u16.u8 v3
mov u32 [screen_ptr], v3

; increment and loop
add screen_ptr, 4
add u_coord, u_delta
add v_coord, v_delta
cmp r10, screen_ptr, span_end_ptr
jz r10 loop

So the problem I'm seeing is not so much the linear interpolation as having to issue ~50 instructions per pixel for the addressing, interpolation, alpha blending and so forth. I guess you could compute multiple pixels together to soften the blow a bit - which would help the interpolation and alpha blending parts, although it doesn't help much with the addressing part.

ForwardCom version 1.04

Post by agner »

Author: Agner Date: 2016-12-08 03:44
Version 1.04 update today:
  • The instruction formats are made more consistent. Template E2 is modified.
  • The masking principle has been changed. There is now an option to specify a fallback value to use when a mask bit is zero. The fallback value can be zero, an input operand, or any other value specified by a register. Registers r0 and v0 are allowed as masks to facilitate the use of boolean functions that have return values in r0 or v0 (see the C sketch after this list).
  • The compare instruction has additional features with comparison of absolute values, and optional extra boolean operands.
  • Conditional jumps are modified.
  • Several other instructions are modified.
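In C-like terms, the per-element behaviour of the new masking option is roughly the following (a simplified sketch, not the exact instruction semantics):

Code:

#include <stdint.h>
#include <stdbool.h>

/* Masked vector addition with a fallback value.  Each element either gets
   the normal result or the fallback, which can be zero, an input operand,
   or any value supplied in a separate register. */
static void masked_add(int n, const bool mask[], const int32_t a[],
                       const int32_t b[], const int32_t fallback[],
                       int32_t result[])
{
    for (int i = 0; i < n; i++)
        result[i] = mask[i] ? a[i] + b[i] : fallback[i];
}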

Author: Matthias Bentrup Date: 2016-12-12 09:42
As the compare_jump_* instructions cover basically all conditional control-flow cases, I guess that the compare instruction will be used mostly to compute mask registers. So I wonder if it would be more economical to restrict RD and RS to 3 bits and so recover 2x2 bits to store the condition code in the instruction itself.

Reading the condition code from a register may have some novel use cases, but I think that in 99% of all code (especially in compiled code) the condition code is fixed.


Author: Agner Date: 2016-12-12 11:46
Matthias Bentrup wrote:
I wonder if it would be more economical to restrict RD and RS to 3 bits and so recover 2x2 bits to store the condition code in the instruction itself.
You are right that it would be more compact, but it would break the uniform template system. This would make the hardware more complex and it would also generate an extra complication for the compiler. The present specification allows you to make a compare with fixed condition in a single-word instruction if the destination register is the same as one of the source registers.


Author: Matthias Bentrup Date: 2016-12-14 03:44
Agner wrote:
[...] The present specification allows you to make a compare with fixed condition in a single-word instruction if the destination register is the same as one of the source registers.
But unless I am mistaken you need two words if you want to compare a register to an immediate value. There are trade-offs in both directions, and I don't think that either way is clearly better than the other.

Async system calls; horizontal packing instruction

Post by agner »

Author: Joe Duarte Date: 2016-12-14 06:13
Hi Agner. I've been thinking about some interesting research I've read recently, and it raises a couple of questions for me:

1. Does ForwardCom assume synchronous system calls? Could it support a flexible async syscall architecture? I'm not sure where the boundary between an ISA and a syscall architecture should lie, but it seems like it could lie wherever you want it to lie – you can choose to fold syscall specifications into an ISA, or to bracket it off as an OS concern.

I ask because this work on async sys calls is very compelling: https://www.usenix.org/legacy/events/os ... Soares.pdf

2. What do you think of the family of horizontal packing operations that the Parabix team pines for? http://parabix.costar.sfu.ca/wiki/ParabixTransform

(See their section headed "Ideal Implementation Using SIMD Packing" on the above-linked page.)

Would it be wise for ForwardCom to offer such instructions? Why or why not?


Finally, it would be nice to have a detailed comparison of ForwardCom and RISC-V, much of it in the form of comparison tables that get into specific features and decisions. I volunteer to help assemble such a comparison document if you're too busy.


Author: Agner Date: 2016-12-15 00:15
Thank you for your interesting ideas.

Joe Duarte wrote:
1. Does ForwardCom assume synchronous system calls? Could it support a flexible async syscall architecture? I'm not sure where the boundary between an ISA and a syscall architecture should lie, but it seems like it could lie wherever you want it to lie – you can choose to fold syscall specifications into an ISA, or to bracket it off as an OS concern.

I ask because this work on async sys calls is very compelling: https://www.usenix.org/legacy/events/os ... Soares.pdf
System calls in ForwardCom are synchronous because they change access rights and the memory map. The switch time will be low because both memory maps can be contained inside the CPU. The "FlexSC" batch processing of system calls proposed in the article you are linking to could be implemented in ForwardCom in the following way: first, you make one normal system call which shares a memory area between the application and the system. After that, you use this shared memory area to communicate between the application thread and the system thread. This requires that the application thread and the system thread each run on their own core and that these two cores have easy access to the same cache. There is still a problem of synchronization between the two threads; this would probably require speculative synchronization in order to be more efficient than standard system calls. And the system thread might be idle most of the time, which is a waste of resources.
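A minimal C sketch of such a shared-memory syscall channel (all names are illustrative; do_syscall stands in for whatever the system thread actually executes, and a real implementation would need a queue of slots and a wake-up mechanism rather than pure polling):

Code:

#include <stdatomic.h>
#include <stdint.h>

enum { SLOT_FREE, SLOT_SUBMITTED, SLOT_DONE };

struct syscall_slot {
    _Atomic int state;       /* SLOT_FREE -> SLOT_SUBMITTED -> SLOT_DONE */
    int         number;      /* which system call */
    int64_t     args[6];
    int64_t     result;
};

extern int64_t do_syscall(int number, const int64_t args[6]);  /* stand-in */

/* Application thread: post a request in the shared area, then go on
   with other work and pick up the result later. */
static void submit(struct syscall_slot *s, int number, const int64_t args[6])
{
    s->number = number;
    for (int i = 0; i < 6; i++)
        s->args[i] = args[i];
    atomic_store_explicit(&s->state, SLOT_SUBMITTED, memory_order_release);
}

/* System thread: poll the shared area, execute, publish the result. */
static void serve(struct syscall_slot *s)
{
    if (atomic_load_explicit(&s->state, memory_order_acquire) == SLOT_SUBMITTED) {
        s->result = do_syscall(s->number, s->args);
        atomic_store_explicit(&s->state, SLOT_DONE, memory_order_release);
    }
}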
What do you think of the family of horizontal packing operations that the Parabix team pines for? parabix.costar.sfu.ca/wiki/ParabixTransform

(See their section headed "Ideal Implementation Using SIMD Packing" on the above-linked page.)

Would it be wise for ForwardCom to offer such instructions?
That sounds like a good idea. We may implement an instruction that shuffles the bits in each 64-bit block (b0,b1,b2,...,b63) of a vector register into the order (b0,b8,b16,...,b56,b1,b9,b17,...,b57,b2,b10,b18,...,b58,...,...,b7,b15,b23,...,b63). This can be followed by a permute of the bytes into the eight streams. Do you have any good use cases for the parallel bit streams?
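For reference, the bit order described above is an 8x8 bit-matrix transpose within each 64-bit block. A straightforward (unoptimized) C version of that shuffle:

Code:

#include <stdint.h>

/* Shuffle one 64-bit block so that output bit 8*i+k is input bit 8*k+i,
   i.e. byte i of the result collects bit i of each of the eight input
   bytes - the order (b0,b8,...,b56,b1,b9,...,b63) described above.
   A plain reference version, not an efficient implementation. */
static uint64_t bit_transpose8x8(uint64_t x)
{
    uint64_t r = 0;
    for (int i = 0; i < 8; i++)
        for (int k = 0; k < 8; k++)
            r |= ((x >> (8 * k + i)) & 1u) << (8 * i + k);
    return r;
}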
Finally, it would be nice to have a detailed comparison of ForwardCom and RISC-V, much of it in the form of comparison tables that get into specific features and decisions. I volunteer to help assemble such a comparison document if you're too busy.
Good idea. We could make a comparison overview and put it on www.forwardcom.info or github.com/forwardcom. It may also include comparisons with x86 and ARM, but it would be too much to include all common instruction sets. I can make a draft and let you comment on it and make additions.