flat assembler
Message board for the users of flat assembler.

Index > Heap > Why is CPU design hard? And MMU plus other random ramblings

lazer1



Joined: 24 Jan 2006
Posts: 185
lazer1
f0dder wrote:
lazer1 wrote:
the hardware should be fast with normal unoptimised asm. if it isnt it will fail, because most people are too lazy to bother with complicated optimisations.
Keep in mind that most running machine code is generated by compilers,


yes, and that is a problem when creating a new cpu: it is very complicated, as people will tend to just port existing compilers to the new cpu.

the laziness applies to the compiler too, especially gcc, which is severely complicated.

Quote:

You're still right that a lot of x86 instructions aren't widely used, though, and that the architecture could be better Smile


the fact that it is based on a RISC core suggests Intel and AMD don't believe in CISC.

I just wish they would make the RISC core directly available to programmers.

Quote:

Personally I don't think you should strive for a minimal instruction set, but a comfortable one, without useless instructions. LOOP is a nice instruction in theory, but when it takes longer than "dec reg / jnz label", it's useless - I don't see why it can't be broken into u-ops that are at least as effective, though.


it's the holiday suitcase problem: if you go on holiday you can only take 20kg on a plane, thus you cannot take your entire house with you!

each item you take prevents you from bringing back things you bought on the holiday.

thus you need to think carefully about every item; each item is extra weight to carry. you could go on holiday taking just a credit card and passport, and buy the extra clothes you need.

it's the same for CPUs: each feature needs to be designed, implemented, maintained and tested. if the documentation is 1000 pages you are weighing down the whole development process.

Quote:

And striving for 1-instruction-1-cycle seems silly, it would probably either mean no SIMD instructions, or having other instructions run artificially slow.


that was the early days of RISC; later on they enhanced things by allowing multicycle operations. multicycle RISC is an EVOLUTION of single cycle RISC.

I think speculative execution has probably superseded RISC as an idea. RISC itself got its speed in a related way: by staggering the CPU components.

pre-RISC instruction execution (the numbers show which instruction each phase is working on):

fetch instruction 100
THEN decode opcode 100
THEN fetch operands 100
THEN combine operands 100
THEN store operands 100

RISC:

fetch instruction 100
AT SAME TIME decode opcode 99
AT SAME TIME fetch operands 98
AT SAME TIME combine operands 97
AT SAME TIME store operands 96

by staggering, they changed, say, 5 serial phases into 5 parallel phases, resulting in 5x the speed.

that is the idea of a factory assembly line, where the items are moved along a conveyor belt and people are working on all phases simultaneously.

similar to the idea of "division of labour" from the early industrial revolution.

in fact I think it was 4x the speed; the phases were probably slightly different from the above description.
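The staggered-phase arithmetic above can be put as a back-of-envelope sketch (hypothetical helper names; this assumes one cycle per phase and a pipeline that never stalls):

```c
/* Cycles for n instructions through s phases when each phase
   must finish completely before the next begins (pre-RISC). */
long serial_cycles(long n, long s) { return n * s; }

/* Cycles when the phases are staggered like a conveyor belt:
   s - 1 cycles to fill the pipe, then one instruction
   completes every cycle. */
long pipelined_cycles(long n, long s) { return (s - 1) + n; }
```

For 100 instructions and 5 phases this gives 500 versus 104 cycles, i.e. the speedup approaches 5x as the instruction count grows.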

speculative execution goes much further, by having multiple copies of each component.

once you have speculative execution you don't have to have RISC.

another useful idea with RISC is instruction bitfield orthogonality: all instructions are, say, 32 bits and the bitfields are always in the same place, thus they can be read in parallel. I think that's why it is 4x the speed, as opcode decode and operand fetch can be done in parallel, i.e. are now one phase.

the opcode is now orthogonal to the operands, which means the operands can be fetched WITHOUT LOOKING AT THE OPCODE!
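As a concrete illustration of fixed bitfields (using the well-known MIPS R-type layout): every field sits at a fixed bit position, so all fields can be extracted at once, without decoding the opcode first:

```c
#include <stdint.h>

/* MIPS R-type instruction: the fields are at fixed positions,
   so each extraction is independent of the others. */
typedef struct {
    uint32_t opcode; /* bits 31..26 */
    uint32_t rs;     /* bits 25..21 */
    uint32_t rt;     /* bits 20..16 */
    uint32_t rd;     /* bits 15..11 */
    uint32_t shamt;  /* bits 10..6  */
    uint32_t funct;  /* bits  5..0  */
} rtype_t;

rtype_t decode_rtype(uint32_t insn)
{
    rtype_t f;
    f.opcode = (insn >> 26) & 0x3f;
    f.rs     = (insn >> 21) & 0x1f;
    f.rt     = (insn >> 16) & 0x1f;
    f.rd     = (insn >> 11) & 0x1f;
    f.shamt  = (insn >>  6) & 0x1f;
    f.funct  =  insn        & 0x3f;
    return f;
}
```

For example, decoding 0x00221820 (the encoding of `add $3, $1, $2`) yields rs=1, rt=2, rd=3, funct=0x20, each field pulled out independently of the opcode.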


Quote:

x86-64 is nice, but there's still too much x86 legacy - both in the CPU, but also the rest of the PC platform. Would be nice throwing it all aside, and doing backward compatibility with emulation, but that's just not going to happen. Itanium had backward compatibility, but it didn't go through (pricing, marketing, lack of performance, etc.)


Itanium failed for strategic reasons; it's NOT ENOUGH to just create a new CPU. you need to create:

1. cpu
2. mobo
3. OS
4. developer software, e.g. text editor, compiler
5. developer liaison to create some initial software for the product launch
6. vendor liaison to sell the damn computer
examples: Atari ST, Amiga 500, Apple Mac, Amstrad, ...

when they launched the Amiga 500 it had literally 50 different apps and games. it was an evolution of the Amiga 1000, which was not aimed at the unwashed masses.

the Amiga was THE ULTIMATE desktop and is still available in emulated form as "Amiga Forever" on top of Windows.

with the Amiga they designed the entire mobo, including the gfx and audio hardware, AND they designed the entire OS: a pre-emptive multitasking desktop OS in 1986.

they designed both the h/w and the OS, and the two were totally integrated. the CPU was standard, namely the 68000, and e.g. the floppy drives were standard, but they used a nonstandard disk format: it was track based instead of sector based, which meant 880K instead of 720K.

i.e. they got an extra 160K by using less sync data. the Amiga would read in an entire track, then search for the sync to find where the track started.
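The 880K versus 720K figure follows directly from the geometry (a sketch: both formats use 80 cylinders, 2 heads and 512-byte sectors, but the Amiga fits 11 sectors per track against the PC's 9 by dropping the per-sector sync gaps):

```c
/* Capacity of a double-density 3.5" floppy:
   cylinders x heads x sectors-per-track x bytes-per-sector. */
long floppy_bytes(long cyls, long heads, long spt, long bps)
{
    return cyls * heads * spt * bps;
}
```

floppy_bytes(80, 2, 11, 512) gives 901120 bytes (880K) for the Amiga format, and floppy_bytes(80, 2, 9, 512) gives 737280 bytes (720K) for PC DOS: the extra 160K comes entirely from the two extra sectors per track.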

Quote:

Also, keep in mind that there's a lot of different kinds of "servers", and that Itanium wasn't just created for servers, but also workstations. You know, big and complicated number-crunching stuff, not just simple serving of web content.


but workstations are also low volume, mainly because they cost the same as a car.

the other problem is that workstations tend to use Unix, mainly because they are too much effort to develop for: i.e. they go for a generic system. and Unix is based entirely on porting; the first thing they port is the C compiler. that means the workstation is likely to get just generic optimisation.

I think nowadays people don't use workstations because today's desktop machines ARE workstations; if you want more power you get a mainframe or you network desktop machines.

networking of desktop machines itself also superseded mainframes in the early 1990s, as it's usually much faster for the user. at uni at lunch time the terminals to the mainframes would freeze up as all the students tried to access their emails across the ethernet.


the fall of IBM is the point at which desktop networks superseded mainframes, which must be around 1992 or 1993.

up till then IBM was THE BIGGEST COMPANY ON EARTH, but Microsoft overtook it to become the biggest. today Bill Gates is no longer the richest man on earth; the richest today is, I think, Carlos Slim, who made his money from mobile phones.

desktop network companies at that time were advertising that it's much cheaper to network PCs than to get a mainframe.


mainframes are still used, but only where desktop machines are unsuitable, namely the "number crunching" which you mentioned, e.g. weather forecasting or satellite image processing. most usage of computers DOESN'T REQUIRE that level of power. most people just use computers to surf the net to buy or chat, to send emails and to print their photos. you can do all that with a Pentium Razz

today PCs + internet servers have overtaken networked PCs. we can in this forum discuss things with people anywhere on the planet; with a PC network you are stuck with the employees of some company.

in fact today you can work from home and submit your work via internet servers, saving money and time by not commuting to work. there was some radio station where they would sometimes broadcast from home, using modern high-quality phone lines to transmit the voice to the transmitter.



the problem with a workstation is that only 1 person can use all the power.

for a company that is very expensive: if they have just 1 workstation, just 1 person can be working on it at any one time.

with mainframes the power is distributed between all the employees.

I think that is why they invented multitasking. in the early days a university would have a mainframe, and the program was fed in by punched paper tape. the astronomer would have access from 1pm to 3pm each day, the nuclear physicist from 3pm to 5pm.

unfortunately that was too exclusive: too expensive to fund the processing needs of a dozen academics. so they instead found a way for the mainframe to work on multiple things simultaneously, and users would access the mainframe via terminals.

that way the whole university could use the mainframe, anytime, even (god forbid) students. Mad


eventually Unix became the most sophisticated way to do this; Unix is just a collection of the best ideas people found for very expensive systems.

instead of buying a printer for every terminal, a university can buy 3 top-quality printers powered by a print server: all print requests are routed to the server, which then routes the jobs to the 3 printers. the printers are used continuously; each time one breaks down a man in a van appears and repairs it. when a printer becomes unrepairable the man in the van throws it in the skip, takes a brand new printer out of the van and connects it up; the drivers are changed by the sysop and a bill is sent to the accounts department.

if you didn't use Unix you had to use some horrendous system from IBM. today IBM just uses Unix for their mainframes, but in the early days IBM created their own system, which was really dreadful.
Post 26 Aug 2008, 13:26
revolution
When all else fails, read the source


Joined: 24 Aug 2004
Posts: 17449
Location: In your JS exploiting you and your system
revolution
lazer1: I'm not sure about what you want. Do you want Intel to design a new instruction set that is only RISC in nature? If so, then it won't be x86 anymore and no existing code will run on it. Why not just use ARM since that is already close to what I think you are describing? At least ARM already has a large user base.

Remember I mentioned the trade-offs; this is one of them. Without an existing user and code base it is very difficult to introduce new architectures. That was why Itanium was mentioned; to show how difficult it is to introduce change. It really doesn't matter how much better technically a product is, the sad fact is that change is hard for many to accept. AMD made a smart move with x86-64, it was an easy path for people to accept.

I think your idea about servers being simpler than desktops is not correct. There is a lot more technology that goes into servers. They are definitely more complex. High-speed RAM, with ECC. Fast HDDs. Multiple network interfaces. Multi-core CPUs, multi-CPU motherboards. Redundant PSUs, etc.
Post 26 Aug 2008, 18:24
lazer1



revolution wrote:
lazer1: I'm not sure about what you want. Do you want Intel to design a new instruction set that is only RISC in nature?


I don't think Intel can create an asm that I would like!

I think their talents are at the engineering level, not at the asm level.

only programmers know what asm is good, and if you try to email AMD you CANNOT contact the CPU developers.

x86 has a very good supervisor level: e.g. Intel's APIC and multiprocessor architecture is very good. that's because it is at the engineering level. their transparent CPU cache coherency is great.

with the Amiga they tried dual 68k + PPC boards and had a lot of problems, as cache coherency had to be done in software. with Intel MP the cache coherency is done in hardware automatically.

also, with x86 all the MMU tables can be fully cached.

as you move to the lower levels Intel is VERY GOOD.

once you reach the user level their asm isn't so good. their prefixes are a horrible feature of the asm. with many RISCs the CPU can just read in 32 bits and immediately read off the different fields IN PARALLEL.

their segment registers are a dreadful idea.

the TSS idea is totally pointless: multitasking is a software problem, not a hardware problem.

thankfully AMD with long mode have REMOVED as much as they can of Intel's pointless ideas.

other CPUs have been using MMUs as the ONLY way to do protection for a LONG TIME. the Amiga 3000 with the 68030 MMU had Unix way back in 1992, 16 years ago.

AMD have seen the light and long mode doesn't use segment registers: try one and the machine will CRASH. fs and gs are NOT SEGMENT registers in long mode but BASE REGISTERS.

a base register is just a register which you ADD, whereas a segment register is an index into a table which has protection info.
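The difference can be modelled in a few lines (a toy model with made-up types, not the real x86 descriptor format): a base register contributes a plain addition, while a segment register selects a table entry whose limit must be checked on every access:

```c
#include <stdint.h>

/* Toy segment descriptor: base + limit + a writable bit.
   (Real x86 descriptors carry much more; this is a sketch.) */
typedef struct { uint64_t base, limit; int writable; } seg_t;

/* Base register: just an addition, nothing to look up. */
uint64_t base_reg_addr(uint64_t base, uint64_t offset)
{
    return base + offset;
}

/* Segment register: index a table, check the limit, and
   flag a fault (returning 0 here) if out of bounds. */
uint64_t seg_reg_addr(const seg_t *table, int sel,
                      uint64_t offset, int *fault)
{
    const seg_t *s = &table[sel];
    if (offset > s->limit) { *fault = 1; return 0; }
    *fault = 0;
    return s->base + offset;
}
```

With a descriptor {base 0x1000, limit 0xff}, offset 0x10 resolves to 0x1010 while offset 0x200 faults; the base-register version is the bare addition with no table and no check.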

the funniest thing is that with all of Intel's protection segments, Windows XP is the most virus-infested OS in the UNIVERSE!

and Unix is the safest, and relies only on MMU protection.

Quote:

If so, then it won't be x86 anymore and no existing code will run on it.


yes, but SSE isn't x86 either; it is something completely new which coexists with x86. all you need is to assign one of the unused rflags bits: bits 22 to 63 are unused, which is FORTY-TWO unused flags.

call the flag: lazer1_new_asm

if that flag is set you have a completely new asm, with completely new registers.

you then set that flag and you have a brand new world.

existing code will run by clearing the flag, and setting the flag is a serializing op.

the new asm can be, say, 32-bit instructions with 2 operands.

Quote:

Why not just use ARM since that is already close to what I think you are describing? At least ARM already has a large user base.


ISTR that ARM is either not fast enough or not cheap enough versus
x86.

Quote:

Remember I mentioned the trade-offs, this is one of them. Without an existing user and code base it is very difficult to introduce new architectures. That was why Itanium was mentioned; to show how difficult it is to introduce change. It really doesn't matter how much better technically a product is, the sad fact is that change is hard for many to accept. AMD make a smart move with x86-64, it was an easy path for people to accept.


people will accept change if the new machine is much faster, much cheaper, and has a good browser + email.

the Itanium was not aimed at the general public; if you are a web hosting company you aren't going to risk some very expensive new h/w, you will wait for the other companies to accept it first.

Quote:

I think your idea about servers being simpler than desktops is not correct. There is a lot more technology that goes into servers. They are definitely more complex. Hight speed RAM, with ECC. Fast HDDs. Multiple networks interfaces. Multi core CPUs, multi-CPU mother boards. Redundant PSUs, etc.


the hardware is much more complex; however, the server company I deal with uses FreeBSD.

I tried out FreeBSD and it is much worse than Linux; in fact it uses Linux drivers for various things.

all the server does is respond to browser requests: web servers are OS independent; a browser can be from any OS or CPU. it sends some ascii to the server, and the server responds to the ascii with some other ascii.

e.g. right now on this forum this post is a "URL". the flatassembler.net server receives that URL and runs a php script. the php script sends the browser a lot of ascii, and Internet Explorer INTERPRETS the ascii to generate a text editor with our friends Very Happy Razz Laughing Surprised

the server doesn't even need any graphics; it just needs various scripting languages such as php and perl, and those can be ported.

a server is just files + scripts: very simple compared to a full-blown desktop OS.

a server doesn't even need a browser! (that can be done remotely from XP or Linux.)

it probably doesn't even need to be multitasking, as it can service one request fully at a time and just keep people waiting. e.g. when I make a post here I could wait 20 seconds before the post is processed.

server companies limit the number of simultaneous connections, and those connections can be processed one at a time. if the server is super fast then it won't make a lot of difference whether it is multitasking or not.


Last edited by lazer1 on 26 Aug 2008, 20:11; edited 1 time in total
Post 26 Aug 2008, 19:56
revolution
When all else fails, read the source


lazer1 wrote:
I think [Intel's] talents are at the engineering level not at the asm level.
I think you seriously underestimate the engineers at Intel.
lazer1 wrote:
the funniest thing is that with all of Intels protection
segments Windows XP is the most virus infested
OS in the UNIVERSE!
Is that a CPU failure or an OS failure? My money is on the OS being at fault.
lazer1 wrote:
All you need is to assign one of the unused
rflags: bits 22 to 63 are unused which is FORTY TWO unused flags.

call the flag: lazer1_new_asm

if that flag is set you have a completely new asm, with completely
new registers.

you then set that flag and you have a brand new world.
It seems you don't fully understand how CPU design works.
lazer1 wrote:
ISTR that ARM is either not fast enough or not cheap enough versus
x86.
Why do you think that is? Is it a technical reason or a marketing related reason?
Post 26 Aug 2008, 20:09
LocoDelAssembly
Your code has a bug


Joined: 06 May 2005
Posts: 4633
Location: Argentina
LocoDelAssembly
Quote:
the funniest thing is that with all of Intels protection
segments Windows XP is the most virus infested
OS in the UNIVERSE!

and Unix is the safest and only relies on MMU protection.


Can you develop this a little more? AFAIK WinXP and Linux both use paging without relying on the segment protections (that is the reason why the stack is executable: all the address space which is readable is also executable, and since CS spans the entire 4 GB space instead of using more fine-grained segmentation, every readable page is executable).
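For reference, a sketch of the classic 32-bit page-table-entry flags involved: there are present, writable and user bits, but no execute bit, which is exactly why every readable page was also executable before NX arrived (the function names here are illustrative; the bit positions are the documented ones):

```c
#include <stdint.h>

/* Low flag bits of a classic 32-bit x86 page-table entry. */
enum {
    PTE_PRESENT = 1u << 0, /* page is mapped            */
    PTE_RW      = 1u << 1, /* writable (else read-only) */
    PTE_USER    = 1u << 2, /* user-mode accessible      */
};

/* Classic 32-bit paging has no execute bit: if the page is
   present and user-accessible, code can be run from it. */
int user_can_exec(uint32_t pte)
{
    return (pte & PTE_PRESENT) && (pte & PTE_USER);
}

int user_can_write(uint32_t pte)
{
    return (pte & PTE_PRESENT) && (pte & PTE_USER) && (pte & PTE_RW);
}
```

Note that a read-only user page still passes user_can_exec: paging can forbid writes, but without NX it cannot forbid execution.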
Post 26 Aug 2008, 20:15
lazer1



revolution wrote:
lazer1 wrote:
I think [Intel's] talents are at the engineering level not at the asm level.
I think you seriously underestimate the engineers at Intel.


they are the guys who invented the segment registers, which are FULLY OBSOLETE with the launch of Vista,

and which the 68000 and MIPS and PPC never ever used,

and programmers all immediately set up flat addressing with the segment registers. they clearly don't listen to any programmers, or they would have known NOBODY WANTS segment registers.

they are the guys who created prefixes, when all that hack and patch could be done away with using a mere new rflags bit.

they are the guys who created an architecture where the supervisor and user states share the same MMU, making it tricky to have orthogonal memory spaces. to switch memory spaces you need a runway region which is in both spaces.

with the Motorola 68030 the supervisor state has its own MMU, different from the user state's. to switch memory spaces you switch to supervisor state, change the user MMU, then switch back; no runway needed.


Quote:

lazer1 wrote:
the funniest thing is that with all of Intels protection
segments Windows XP is the most virus infested
OS in the UNIVERSE!
Is that a CPU failure or an OS failure? My money is on the OS being at fault.


correct, it's an OS failure.

you can make x86 fully secure; it's just that the whole Windows design is fundamentally insecure.

Linux on the same hardware is secure.

Quote:

lazer1 wrote:
All you need is to assign one of the unused
rflags: bits 22 to 63 are unused which is FORTY TWO unused flags.

call the flag: lazer1_new_asm

if that flag is set you have a completely new asm, with completely
new registers.

you then set that flag and you have a brand new world.
It seems you don't fully understand how CPU design works.


they use such flags already to switch between various CPU modes.

x86 is all emulated anyway, so it's just a software problem: the RISC core's x86 emulator just does:

if( rflags & lazer1_new_asm_bit )
{
    lazer1_new_asm_emulate();
}
else
{
    long_mode_emulate();
}

the x86 architecture is a software app above a RISC core.

Quote:

lazer1 wrote:
ISTR that ARM is either not fast enough or not cheap enough versus
x86.
Why do you think that is? Is it a technical reason or a marketing related reason?


it's the aggressive marketing contracts by Intel and Microsoft.

you can only be successful with products which don't compete, as MS and Intel contracts with vendors forbid marketing competitors.

Dell was forced to use Intel CPUs. Intel's contract forbade them from using e.g. AMD cpus, otherwise no more Intel CPUs.

Dell took Intel to court and WON, and now Dell machines will sometimes use Intel and sometimes AMD, whichever is cheaper at the time.

as a result of winning this case Dell got featured on the AMD site.

the Amiga platform got sunk by MS contracts: the Amiga got bought up by Gateway, who make PCs, and Gateway's contract with MS forbids them to develop x86 AmigaOS. Gateway did everything to crush x86 AmigaOS. but AROS have successfully created an unofficial x86 AmigaOS. (www.aros.org)

there is also Amiga Forever, which emulates 68k AmigaOS on Windows; that exists because it was done before Gateway took control. it was sanctioned by the predecessors of Gateway's subsidiary Amiga Inc.
Post 26 Aug 2008, 20:35
revolution
When all else fails, read the source


lazer1: Where are you going with your posts?

You seem to be saying that the 68xxx architecture is very nice, so can I suggest that you just go and buy an old Apple machine with the Motorola CPU? Then your problem is solved, you have the CPU design you want.

Now why is it that the user base did not all run out and buy Apple PCs? If they are technically superior then people should be snapping them up without delay. I think I mentioned above (more than once) that being technically superior is not the path to success in the CPU or OS world.
Post 26 Aug 2008, 20:54
f0dder



Joined: 19 Feb 2004
Posts: 3170
Location: Denmark
f0dder
Quote:
You can make x86 fully secure, its just that the whole Windows
design is fundamentally insecure.

Linux on the same hardware is secure.
Bullshit.

NT has more fine-grained security than linux and other unix-like systems. Security holes are almost always in usermode software, not the kernel. And as for "UNIX is secure!", just look at how many exploits BIND has had.

Segment registers sorta made sense in the 16bit days, and intel had to keep them for compatibility. Yes, they should have dumped a lot of stuff and done a redesign with the 80386, but they didn't - and there's not much we can do about it now.

Amiga got fucked sideways by bad management. Also, while the OS had some cool technical stuff going for it, the consumer 68k machines didn't have memory protection, and as a whole it wasn't a very fulfilling experience (the games rocked, though).

It seems you have misunderstood a bunch of things, and only have superficial knowledge about others Smile
Post 27 Aug 2008, 14:44
lazer1



LocoDelAssembly wrote:
Quote:
the funniest thing is that with all of Intels protection
segments Windows XP is the most virus infested
OS in the UNIVERSE!

and Unix is the safest and only relies on MMU protection.


Can you develop this a little more? AFAIK WinXP and Linux both use paging without relaying in the segment protections (that is the reason for why the stack is executable, because all the address space which is readable also is executable and since CS spans the entire 4 GB space instead of using more fine-grained segmentation then every readable page is executable).


the proof of the pudding is in the eating: HUNDREDS of viruses, dozens of spyware at any one time.

MMUs have been the STANDARD way to do security for more than a decade, but you need the entire OS design to be "secure".

segments are too complicated.
Post 27 Aug 2008, 23:58
lazer1



revolution wrote:
lazer1: Where are you going with your posts?

You seem to be saying that the 68xxx architecture is very nice, so can I suggest that you just go and buy an old apple machine with the Motorola CPU? Then your problem is solved, you have the CPU design you want.


I have no interest in Apple, and I use the Amiga 68k emulator WinUAE ALL THE TIME on top of XP: e.g. I write my fasm source under WinUAE and then switch over to XP to run fasm.

so I do use 68k, but in emulated form.

but 68k was more or less discontinued quite a long time ago.

Quote:

Now why is it that the user base did not all run out and buy Apple PCs? If they are technically superior then people should be snapping then up without delay. I think I mentioned above (more than once) that being technically superior is not the path to success in the CPU or OS world.


I don't have any interest in Apple!

Apples are mainly status symbols; I don't think there is anything at all interesting about them, other than that they do have professional software.

nice software, shame about the machine!
Post 28 Aug 2008, 00:02
revolution
When all else fails, read the source


lazer1 wrote:
I dont have any interest in Apple!

Apples are mainly status symbols, I dont think there is anything
at all interesting about them other than that they do have
professional software.

nice software shame about the machine!
I'm glad to see that you agree that marketing has the most influence over what people buy.
Post 28 Aug 2008, 00:14
LocoDelAssembly
Your code has a bug


Quote:

MMUs have been the STANDARD way to do security for more than a decade, but you need the entire OS design to be "secure".


MMU = Memory Management Unit Question
Post 28 Aug 2008, 00:58
revolution
When all else fails, read the source


LocoDelAssembly wrote:
MMU = Memory Management Unit Question
Yes, have you not had the opportunity to work with one yet?
Post 28 Aug 2008, 01:03
LocoDelAssembly
Your code has a bug


I was expecting the answer from lazer1 because I'm trying to understand what he is trying to say about Windows: does he mean that my WinXP doesn't use the Memory Management Unit for memory protection while Unix does? (now even you can reply to this question if you want to Wink)
Post 28 Aug 2008, 01:11
revolution
When all else fails, read the source


Perhaps we can assume that lazer1 meant something other than the paging mechanism of the MMU. Although I can't imagine what that would be, we already know the segmentation unit is effectively bypassed.
Post 28 Aug 2008, 01:14
lazer1



f0dder wrote:
Quote:
You can make x86 fully secure, its just that the whole Windows
design is fundamentally insecure.

Linux on the same hardware is secure.
Bullshit.

NT has more fine-grained security than linux and other unix-like systems. Security holes are almost always in usermode software, not the kernel.


XP and Vista are both derived from NT and they have HUNDREDS of viruses. whether they are at user or kernel level doesn't help much if I have to spend 5 hours doing virus scans for the damn viruses.

a secure OS shouldn't have security problems ANYWHERE!

now it's a challenge to do that, but it can be done.

it's no different from security in real life, e.g. house security:

always lock the front door immediately after leaving or entering the house, otherwise you will go out, get distracted by someone, and someone else gets in.

make sure the doors can only be locked via a key (as with all modern house doors in the UK); that way you cannot get locked out!


the main reason Windows is so insecure is that it is an active paradigm: Windows CONTINUOUSLY launches stuff, e.g. checks for updates and other rubbish.

that's like a hotel room where the staff just walk in and do whatever they want. CAN YOU PLEASE KNOCK FIRST?

otherwise thieves can just walk in dressed as hotel staff too; cue the spyware and viruses and other Windows rubbish.

the same applies to operating systems: automatic active connection to the internet is a HAZARD.

unfortunately Linux is heading that way, e.g. Ubuntu and Fedora now automatically look for updates; Fedora 3 didn't. that is so pointless: most of those updates are totally unnecessary. well-written code doesn't need much updating. if I experience a problem with some software I will visit the homepage and download the latest version.

I don't want twenty unrecognisable updates that I never asked for every day.

the whole point of betatesting is to stabilise the software so it DOESN'T NEED updating for some MONTHS.

too many things appear on the launch bar, and Windows by default allows remote users to view your machine. good idea Mad


an OS like AmigaOS is PASSIVE: it never launches things unilaterally, only the user launches stuff. that prevents the endless problems. on the Amiga it is IMPOSSIBLE for viruses to arrive via email, as an email is just some passive bytes.

the idea of email viruses is laughable Very Happy

only Microsoft makes it possible! Razz

but with Windows, Outlook Express automatically goes and fetches ALL emails, and you get all the various infections.

with Windows you can even catch viruses just by clicking a webpage link. that is an astonishing level of danger.

with the Amiga I use YAM, as on Unix, and you query each email one by one, e.g. delete AT THE SERVER before it reaches the computer. because I use an emulator on top of XP, any emails with viruses can still infect XP: they CANNOT infect the Amiga, but they do infect the emulator host XP.

with YAM, for each email you can select:

load skip delete abort

load means accept the email;
skip means leave it at the server for the next session;
delete means DELETE AT SERVER, e.g. if it looks like SPAM;
abort means ignore all the commands so far, e.g. if you change your mind.

right at the end the whole batch of "load" emails arrives; it doesn't transfer any emails until then, so you can abort the process.

Quote:

And as for "UNIX is secure!", just look at how many exploits BIND has had.


I have never heard of a Linux virus.

when I say Unix, really I mean Linux, as it's the de facto user version of Unix today.

with Windows there are new viruses more or less every day; every other junk email brings an infection.

Quote:

Segment registers sorta made sense in the 16bit days, and intel had to keep them for compatibility.


FALSE!

long mode doesn't have segment registers and it has compatibility.

YOU CAN DO ANYTHING AT ALL NEW via an unused rflags bit.

(long mode does have code segments, but that is just a trick, e.g. to switch cpu mode for compatibility. if you only code at long mode supervisor level, once it is running you don't need any further code segments:

jmp [ label ]
call rax

etc.)


Quote:

Yes, they should have dumped a lot of stuff and done a redesign with the 80386, but they didn't - and there's not much we can do about it now.


the Motorola 68000, way back in the 1980s, had flat 32-bit addressing
as soon as you switched the machine on!

Intel architecture user mode is the MOST BACKWARD CPU on earth.

I have been coding 68k since about 1989, and I have coded
MIPS; both are just fine. I was utterly amazed to find
how PRIMITIVE x86 still was in 2004, 15 years later. I bought an Intel Celeron machine and it was like going back to the stone age. That was my first PC.

I was very amused by the 16-bit segments; that is so ridiculous.

AMD long mode is like a 64-bit version of the Motorola 68030's MMU,
which I bought around 1997, ELEVEN YEARS AGO.

the 68030 MMU is really cool: it allows pages of 256 bytes, 512 bytes,
1024 bytes, etc,

and you can have variable-size tables at each level of the MMU,

so you can set up an MMU scheme which is exactly right.


ALL other CPUs, such as MIPS, PPC, 68k, etc, use flat addressing.

where Intel architecture IS GOOD is the supervisor side, e.g.
their multiprocessor architecture is perfect, and
AMD cache coherency is also perfect.

on the supervisor side x86 is way ahead of Motorola and PPC.

I give credit where credit is due and deny it where it is undeserved Mad

68k supervisor mode is a bit simple, but it's nice having 2 MMUs.

I don't like PPC; it's too complicated to understand.

long mode x86 asm is very similar to 68k asm;

many of the opcodes are the same, but the bit encoding
is completely different. Both have 16 registers:

68k has d0, d1, d2, d3, d4, d5, d6, d7 (data registers)
and a0, a1, a2, a3, a4, a5, a6, a7 (address registers);

x86 has: rax, rbx, rcx, rdx, rsi, rdi, rsp, rbp,
r8, r9, r10, r11, r12, r13, r14, r15.

with 68k, a7 is the stack pointer.

MIPS has 32 registers, with r0 hardwired to zero.

Quote:

Amiga got fucked sideways by bad management. Also, while the OS had some cool technical stuff going for it, the consumer 68k machines didn't have memory protection, and as a whole it wasn't a very fulfilling experience (the games rocked, though).


the Amiga is an insecure OS, yet it had no viruses other than the
floppy disk boot sector viruses.

and that vulnerability exists because the boot sector of a floppy is
the ONLY ACTIVE thing on AmigaOS. All OSes have an active
boot sector, but other OSes haven't had boot sector viruses because nowadays
people don't use bootable apps any more.

the Amiga boot sector viruses would copy themselves to the
boot sector of further disks you inserted. They would wait and eventually announce:

something wonderful has happened, your Amiga is alive!

The Amiga versus XP is proof that an OS doesn't need to be secure
to be safe. XP is secure yet INFESTED, while AmigaOS is insecure but
never had any viruses other than the boot sector ones, and those were
pretty easy to counteract by checking floppy boot sectors for
unrecognised bootcode.

the other problem with XP is that it is TOO complicated, which is just
asking for security problems. It has too many system files.

Quote:

It seems you have misunderstood a bunch of things, and only have superficial knowledge about others Smile


that is true for everyone in computing.

computing is by nature an informal subject; it is always
a bunch of various proprietary "products".

it's not an objective science.

everyone in computing thinks they know everything Very Happy

but nobody knows much Mad

there isn't any objective truth in computing; it's a question
of making good use of limited resources.

you can make any idea succeed in computing if you
try hard enough, e.g. Intel have succeeded with
some truly dreadful ideas such as segment registers
and 16-bit offsets.

I may have superficial knowledge, but I am creating a ton
of stuff with x86 asm.

you don't need much knowledge to succeed at computing;
you just need hard work and imagination.
Post 28 Aug 2008, 01:32
lazer1
LocoDelAssembly wrote:
I was expecting the answer from lazer1 because I'm trying to understand what he is saying about Windows: does he mean that my WinXP doesn't use the Memory Management Unit for memory protection while Unix does? (now even you can reply to this question if you want to Wink)


no,

they both use the MMU; it's just that
XP uses the MMU incompetently!

An MMU on its own won't give you security, just as a secure door
won't give you security if you lend the key out to people
(as they will duplicate the key).

only a MAD person would try to implement security with segments;
it probably can be done, but why bother when you have an MMU which
gives you individual protection for each 4K page.
Post 28 Aug 2008, 01:37
lazer1
LocoDelAssembly wrote:
Quote:

MMUs have been the STANDARD way to do security for more than a decade, but you need the entire OS design to be "secure".


MMU = Memory Management Unit Question


the Motorola 68k series would have "MMU" in the chip name if
the chip had an MMU, e.g. 68030MMU.

thus in the 68k world MMU is established as meaning
Memory Management Unit.

I think outside of x86 the abbreviation is well established.

another abbreviation we use is FPU, meaning "floating point unit",
e.g. the 68030 could have a separate FPU, of which there were 2 versions:
the 68881 and the 68882. You could have a 68030 either with no FPU,
or with a 68881, or with a 68882.

thus FPU became established terminology.

with x86 it's more complicated, as it's not clear to users
that the x87 is an FPU.

other terminology:
"emu" means emulator, as there are several for the Amiga,
and there were PC and Mac emulators for the Amiga a long time
ago;

and "reimplementation" means something like Linux or
AROS, where people have recreated a clone OS.

I guess it's like for PCs, where AGP and SATA and dual-core
and VGA are established terminology.

but x86 machines don't mention the MMU when you go into a shop, so
the terminology tends to be known only by people
who talk to people on other CPUs, especially 68k.
Post 28 Aug 2008, 01:51
LocoDelAssembly
Your code has a bug
Joined: 06 May 2005
Posts: 4633
Location: Argentina
Then, lazer1, we are both referring to the same thing, but that raises another question: in what way does Linux implement memory protection via the MMU less incompetently than WinXP? And please don't take into account the number of viruses available on each platform, because that depends far more on user base and default user privileges than on real kernel design flaws (nearly all Windows customers use "root" users instead of creating and using a limited user). Let's consider only the memory-protection aspect of the two OSes.
Post 28 Aug 2008, 02:11
f0dder
Joined: 19 Feb 2004
Posts: 3170
Location: Denmark
I can't really be bothered with this anymore, it's ludicrous Smile

lazer1, you write too much fluffy irrelevant text, and your examples/analogies are pretty bad. You also don't seem to get where the problems/security holes are really located.
Post 28 Aug 2008, 05:15
Copyright © 1999-2020, Tomasz Grysztar.