Did the transmeta processor architecture work by binary translation?
Transmeta Corporation produced the Crusoe processor architecture. (Transmeta was also famous for having Linus Torvalds work there at the time.)
We can see from the Wikipedia article that the Crusoe processor implements Code Morphing Software, which in theory gives it the ability to execute instructions from other architectures (e.g. RISC or Java VM instructions).
We see a similar pattern in Apple's Rosetta binary translation software for Mac OS X, which enabled binaries compiled for PowerPC to continue running on the new x86 architecture.
My question is: did the Transmeta processor architecture work by binary translation?
chip binary-translation
asked Dec 3 at 6:44
hawkeye
Started reading the Wikipedia article you refer to, and I see: "Code Morphing Software consisted of an interpreter, a runtime system and a dynamic binary translator." Isn't that an answer to your question?
– Anonymous
Dec 3 at 7:29
You could argue those components add up to a virtual machine, not a translator. This is why I asked the question.
– hawkeye
Dec 3 at 7:59
2 Answers
In the Wikipedia article on Transmeta there's a good example of the Code Morphing process, taken from a PDF document (Wayback-archived) with even more details:
The operation of Transmeta's code morphing software is similar to the final optimization pass of a conventional compiler. Consider a fragment of 32-bit x86 code:
add eax,dword ptr [esp] // load data from stack, add to eax
add ebx,dword ptr [esp] // ditto, for ebx
mov esi,[ebp] // load esi from memory
sub ecx,5 // subtract 5 from ecx register
This is first converted simplistically into native instructions:
ld %r30,[%esp] // load from stack, into temporary
add.c %eax,%eax,%r30 // add to %eax, set condition codes.
ld %r31,[%esp]
add.c %ebx,%ebx,%r31
ld %esi,[%ebp]
sub.c %ecx,%ecx,5
The optimizer then eliminates common sub-expressions and unnecessary condition code operations and, potentially, applies other optimizations such as loop unrolling:
ld %r30,[%esp] // load from stack only once
add %eax,%eax,%r30
add %ebx,%ebx,%r30 // reuse data loaded earlier
ld %esi,[%ebp]
sub.c %ecx,%ecx,5 // only this last condition code needed
Finally, the optimizer groups individual instructions ("atoms") into long instruction words ("molecules") for the underlying hardware:
ld %r30,[%esp]; sub.c %ecx,%ecx,5
ld %esi,[%ebp]; add %eax,%eax,%r30; add %ebx,%ebx,%r30
These two VLIW molecules could potentially execute in fewer cycles than the original instructions could on an x86 processor.
So it does indeed translate the x86 binary code into native VLIW binary code. You can call this "binary translation"; it's not an "interpreter", and it's not a "virtual machine" (though that notion is a bit fuzzy; a virtual machine can use various methods to execute code, including translating it).
Also note that modern x86 CPUs all use a similar scheme: they translate x86 binary code into simpler, RISC-like code, and then schedule and execute it.
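To make the translate/cache/reuse flow concrete, here is a small, self-contained Python sketch. The toy guest ISA, the program, and every name in it are invented purely for illustration; real Code Morphing Software also interprets cold code, applies the optimizations shown above and packs the result into VLIW molecules, none of which this toy attempts.

# Toy dynamic binary translator (illustrative only; the guest ISA below is
# invented for this example and has nothing to do with x86 or Crusoe native code).
GUEST = {                       # a tiny guest program: sum 5+4+3+2+1 into r1
    0: ("movi", "r0", 5),
    1: ("movi", "r1", 0),
    2: ("add",  "r1", "r0"),
    3: ("subi", "r0", 1),
    4: ("jnz",  "r0", 2),       # backward branch: this block becomes "hot"
    5: ("halt",),
}

cache = {}                      # guest block address -> translated host function

def translate(entry):
    """Translate one guest basic block into host (Python) code, then compile it."""
    lines = ["def block(regs):"]
    pc = entry
    while True:
        op, *args = GUEST[pc]
        pc += 1                          # pc now points at the fall-through target
        if op == "movi":
            lines.append(f"    regs['{args[0]}'] = {args[1]}")
        elif op == "add":
            lines.append(f"    regs['{args[0]}'] += regs['{args[1]}']")
        elif op == "subi":
            lines.append(f"    regs['{args[0]}'] -= {args[1]}")
        elif op == "jnz":                # a branch ends the basic block
            lines.append(f"    return {args[1]} if regs['{args[0]}'] else {pc}")
            break
        elif op == "halt":
            lines.append("    return None")
            break
    namespace = {}
    exec("\n".join(lines), namespace)    # "code generation" step of the toy translator
    return namespace["block"]

def execute(entry=0):
    regs, pc = {}, entry
    while pc is not None:
        if pc not in cache:     # first visit: translate the block and cache it
            cache[pc] = translate(pc)
        pc = cache[pc](regs)    # every later visit reuses the cached translation
    return regs

print(execute())                # -> {'r0': 0, 'r1': 15}

Running it prints {'r0': 0, 'r1': 15}; the loop body at guest address 2 is translated into host code once, on its first execution, and the cached translation is reused for every remaining iteration.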
edited 2 days ago
Alexis King
answered Dec 3 at 10:14
dirkt
It's not just "modern" x86 processors which do such translation; the contemporaries of the Crusoe already did it.
– MSalters
Dec 3 at 11:48
@MSalters: Which contemporaries do you mean exactly? "Modern" x86 CPUs do quite a bit of translating and rescheduling, just like the optimization on the Transmeta shown above. I'm not aware that other CPUs did that at the time of the Crusoe, so if any did, I'd be extremely curious.
– dirkt
Dec 3 at 11:50
AMD's K6 and Athlon, as well as Intel's Pentium 2 and 3 all translated x86 into a RISC-like internal representation. Rescheduling or Out-of-Order execution came with the original Pentium.
– MSalters
Dec 3 at 11:59
@MSalters Sure, they (in fact the K5 before them) do translate a single x86 instruction into one or more (up to 4 for the K5) internal opcodes, and may reschedule them. But this is a strictly linear process within the CPU to better utilize separate functional units. The code gets neither optimised nor rearranged (beside using different functional units) and, most of all, is not written back into memory to be executed from there in all subsequent executions. Even modern CPUs don't do that - caching the intermediate format is the maximum that is done.
– Raffzahn
Dec 3 at 17:49
@MSalters The Pentium (P5) CPUs were superscalar (two pipelines) but in-order. The Pentium Pro (P6) CPUs were the first out-of-order x86 CPUs.
– Ross Ridge
Dec 3 at 21:13
From a comment:
You could argue those components add up to a virtual machine, not a translator.
A virtual machine IS a translator. The virtual ISA is translated to run on the physical ISA. The only real distinction is whether and for how long the translations are saved for reuse.
Any microcoded CPU is a virtual machine, in which every instruction is translated on the fly ("interpreted") every time it is encountered — there is no attempt to reuse the translation.
I once worked on the design of a machine (in the early 1980s) that did the translation (from a zero-address "stack machine" ISA to a three-address RISC ISA) when moving instructions from main memory to the instruction cache. As long as the cache line was not replaced, the translation could be reused.
IIRC, the Transmeta actually writes the translations out to a separate area of main memory, allowing them to persist indefinitely. The translation is done by software, rather than hardware, and as long as the original executable file is not modified, the translation can be reused.
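To illustrate the "whether and for how long the translations are saved" distinction, here is a minimal, hypothetical Python sketch in which a saved translation is reused indefinitely, but only while the guest code it was made from stays unmodified. The names and the hashing scheme are invented for the example; real translators typically detect modification differently, for instance by write-protecting pages that have been translated, but the effect is the same.

# Hypothetical sketch: translations persist until the guest code they were
# made from is modified. All names here are invented for illustration.
import hashlib

class TranslationStore:
    def __init__(self):
        self.entries = {}                 # guest address -> (digest, host_code)

    def lookup(self, addr, guest_bytes, translate):
        digest = hashlib.sha256(guest_bytes).digest()
        hit = self.entries.get(addr)
        if hit is not None and hit[0] == digest:
            return hit[1]                 # code unchanged since translation: reuse it
        host_code = translate(guest_bytes)    # new or modified code: translate again
        self.entries[addr] = (digest, host_code)
        return host_code

store = TranslationStore()
translate = lambda raw: f"<host code for {raw.hex()}>"   # stand-in translator
print(store.lookup(0x1000, b"\x01\xd8", translate))      # translated on first use
print(store.lookup(0x1000, b"\x01\xd8", translate))      # same bytes: reused
print(store.lookup(0x1000, b"\x29\xd8", translate))      # code changed: retranslated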
edited Dec 3 at 13:54
answered Dec 3 at 12:56
Dave Tweed
Agreed. I would argue that code execution (interpretation / binary translation / native code virtualization) is just one property of a VM. Typical VMs also define memory, manage threads, control access to storage, and so on, so BT is not in and of itself enough to qualify something as a virtual machine. FWIW, the two engineers who wrote the original just-in-time compiler for Android's Dalvik VM previously worked at Transmeta.
– fadden
Dec 3 at 15:59