Such an operation needs only the postincrement addressing mode described above. Furthermore, reading the elements of a string or array is more common than writing them; indeed, many standard library functions perform no writing at all. Therefore, if you have a limited number of addressing modes in your instruction-set design, the most useful addressing mode is a read that postincrements.
This gives you not only the most useful string and array operations, but also a POP instruction that grows the stack downward. The second-most-useful addressing mode is then a predecrement write, which can be used for the matching PUSH instruction.
Indeed, the PDP-11 had postincrement and predecrement addressing modes, which produce a downward-growing stack. Even the VAX did not have preincrement or postdecrement. One advantage of descending stack growth in a minimal embedded system is that a single chunk of RAM can be redundantly mapped into both page 0 and page 1, allowing zero-page variables to be assigned starting at 0x00 while the stack grows downwards from 0x1FF, maximizing how far it can grow before overwriting the variables.
By comparison, a minimal embedded system of that era based on other contemporary processors would be four or five chips.
Why do stacks typically grow downwards? (asked by Ben Zotto, 11 years, 10 months ago; viewed 26k times)

I like the Z80 RAM detection strategy story. It makes some sense that text segments are laid out growing upwards: programmers of yore had somewhat more direct contact with the implications of that than with the stack.
Thanks paxdiablo. The pointer to the set of alternative forms of stack implementation is also super interesting. Didn't early-day memory have a way to report its size, so that it wouldn't have to be determined manually? I still remember that the TRS-80 Model 3's method for getting the date and time was to ask the user for it at boot time.
Having a memory scanner to set the upper limit of memory was considered state of the art back in the day :-) Can you imagine what would happen if Windows asked you for the time, or how much memory you had, every time you booted? Indeed, the Zilog Z80 documentation says the part starts up by setting the PC register to 0000h and executing. It also sets the interrupt mode to 0, disables interrupts, and sets the I and R registers to 0. After that, it starts running the code at 0000h.
THAT code has to initialize the stack pointer before it can call a subroutine or enable interrupts. What vendor sells a Z80 that behaves the way you describe?
Mike, sorry, I should have been clearer. It was actually controlled from a program in ROM. I'll clarify.

It seems to me like buffer overflows would be a lot harder to exploit if the stack grew upward.

I believe it comes from the very early days of computing, when memory was very limited and it was not wise to pre-allocate a large chunk of memory for the exclusive use of the stack. So, by allocating heap memory from address zero upwards and stack memory from the end of memory downwards, you could have the heap and the stack share the same area of memory.
If you needed a bit more heap, you could be careful with your stack usage; if you needed more stack, you could try to free some heap memory. The result was, of course, the occasional spectacular crash, as the stack would overwrite the heap or vice versa.
Back in those days there were no interwebz, so buffer-overrun exploitation was not an issue. Or, to the extent that the interwebz existed, it was all within high-security facilities of the United States Department of Defense, so the possibility of malicious data did not need to be given much thought. After that, with most architectures it was all a matter of maintaining compatibility with previous versions of the same architecture.
That's why upside-down stacks are still with us today. Some hardware has the heap starting at high memory and growing down, while the stack starts at low memory and grows up. Notably, stacks on the Multics processors grew in the positive direction rather than the negative direction. This meant that if you actually accomplished a buffer overflow, you would be overwriting unused stack frames rather than your own return pointer, making exploitation much more difficult.
That's a rather interesting statement. Did buffer overflows become such a huge problem only because of the "customary" procedure-call stack-frame arrangement? Also, how much of Multics' reputation as Totally Invulnerable was just a fluke of hardware design?
The call instruction pushes the return address, and further inside the function the compiler pushes BP. The compiler places locals after this. Since the stack grows in a particular direction, after each nested function call the address of a local will be at either an increasing or a decreasing position. If we compare the address values, we can determine the direction of growth.
Here is a small C code example: we pass the address of a local variable of an outer function into an inner function call. By the time the inner function runs, the stack has grown by one frame.
The next step is to compare the addresses of the two locals.