flat assembler
Message board for the users of flat assembler.

Let's talk about sections '.data' and '.bss' in big projects

Roman



Joined: 21 Apr 2012
Posts: 1878
Roman 29 Jul 2018, 12:26
In C++ I see one big plus: all parameters/variables are located together with the function/code.

For small projects FASM is nice. But it becomes a problem when we use more than 100 variables in the '.data' and '.bss' sections.

Once, in one project, I wrote a right-mouse-button popup menu and other procs, just to try it. Then (after a couple of months) I decided to write a new RichEdit project. The RichEdit project has more than 40 data variables, and I tried to reuse the code from the previous project (the right-mouse-button popup menu).

Both projects have a lot of data. I moved 20 data variables from the popup-menu project into the new project, and the popup menu did not work correctly in the RichEdit project. I spent two days finding the error: I had used the same name "menu" in two different projects!
In the RichEdit project it is:
menu main_menu
menuitem '&File', 0, MFR_POPUP ;+ MFR_END

In the popup-menu project it is:
menu dd menuA,menuB

It is very complicated to write code in three places! And I spent hours moving all the variables and code to the other project, because I had to write the code in the '.code' section, then all the initialized data in the '.data' section, and
then all the uninitialized variables in the '.bss' section. It took a lot of time!

My proposal: do this in FASM with a macro or with preprocessing.
FASM would sort all @bb/@bw/@bd and @bb_/@bw_/@bd_ declarations into the '.data' and '.bss' sections.
My vision, as an example (data and code all in one place):

Code:
;this data is collected into section '.data' readable writeable
@bb (it means db 0)  name1,name2,name3,...,name100000
@bw (dw 0)  name1,name2,name3,...,name100000
@bd (dd 0)  name1,name2,name3,...,name100000
;this data is collected into section '.bss' readable writeable
@bb_ (rb 0)  V_name1,V_name2,..,V_name100000
@bw_ (rw 0)  V_name1,V_name2,..,V_name100000
@bd_ (rd 0)  V_name1,V_name2,..,V_name100000

;Procs or code using these variables
;Easy to read, and easy to copy and paste this code! Everything is in one place!
@bb   nameA 'Text nameA',0
@bb   nameB 'Text nameB',0
@bd_  V_nameA,V_nameB  ;I write V_nameA because it's an abstraction and we know it's .bss data

mov eax,nameA
mov [V_nameA],eax
mov ecx,nameB
mov [V_nameB],ecx

@bb   nameC 'Text nameC',0
@bb   nameD 'Text nameD',0
@bd_  V_nameC,V_nameD  

mov eax,nameC
mov [V_nameC],eax
mov ecx,nameD
mov [V_nameD],ecx
    


This method speeds up writing code, and everything (variables/data/code) will be in one place/file!
We just copy or include this file, and FASM sorts all the data and code automatically!
This also eliminates errors with data and data redefinition.
It is more comfortable to write code this way.


I wrote @bd_ just as an example. Perhaps it would be better (because it's shorter) to write @vd V_nameA.


Last edited by Roman on 29 Jul 2018, 15:53; edited 4 times in total
Roman



Joined: 21 Apr 2012
Posts: 1878
Roman 29 Jul 2018, 12:57
And my question: can FASM currently have several '.data' or '.bss' sections?
I mean this:

Code:
section '.data' data readable writeable
p1 dd 0
section '.data' data readable writeable
p2 dd 0
section '.bss' data readable writeable
V1 rd 1
section '.bss' data readable writeable
V2 rd 1
Ali.Z



Joined: 08 Jan 2018
Posts: 772
Ali.Z 29 Jul 2018, 14:51
yes.

_________________
Asm For Wise Humans
Tomasz Grysztar



Joined: 16 Jun 2003
Posts: 8367
Location: Kraków, Poland
Tomasz Grysztar 29 Jul 2018, 15:46
There are some macros that may help you group code and data from different places in source into a single section.

Defining separate copies of a section is generally not a recommended thing when assembling an executable directly. With object files it is a different story, a linker is going to combine them anyway. But in executables this would be wasteful at least.
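[For reference, one well-known technique of this kind (not necessarily the macros Tomasz has in mind) is macro stacking: redefining a collector macro so that the new body first expands the previous definition, then appends the new items. A rough, untested sketch; the names @bd_ and bss_data are invented here:]

Code:
; start with an empty collector macro
macro bss_data {}

; each use of @bd_ redefines bss_data so that the new definition
; first expands the previous one, then reserves the new names
macro @bd_ [name]
{
 common
  macro bss_data
  \{
   bss_data
 forward
   name rd 1
 common
  \}
}

; near the code that uses the variables:
@bd_ V_nameA, V_nameB

; once, at the end of the source:
section '.bss' readable writeable
bss_data

The same idea extends to the initialized-data variants: a second collector macro would be expanded once inside the '.data' section.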
DimonSoft



Joined: 03 Mar 2010
Posts: 1228
Location: Belarus
DimonSoft 29 Jul 2018, 16:04
Actually, having global variables is a terribly bad thing from the POV of testability. Only a few things in a project are really global (say, you might want to save the value returned by GetProcessHeap to avoid calling it every now and then), all the “variables” except those few things are not global by their nature: use local variables or dynamically allocated memory. Having your window data in data section decreases your code reusability.

What should really go into data sections if you want to avoid crappy code:
* really global stuff (mentioned earlier);
* initialized data that is used as constants and/or variables with predefined values.

As an example of the second case: you might want to fill a WNDCLASSEX structure at compile time, but you'll only get valid values for hIcon, hCursor and hIconSm at runtime, so such a structure goes into the data section but is slightly modified at runtime. This information is also application-global anyway, since window classes only have to be registered once.
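[A sketch of that second case in Win32 FASM syntax; WindowProc, the class name and the section layout are assumed here, not taken from a real project:]

Code:
section '.data' data readable writeable
  class_name db 'MAINCLASS',0
  ; compile-time fields are filled here; the handle fields
  ; (hInstance, hIcon, hCursor, hIconSm) stay zero for now
  wc WNDCLASSEX sizeof.WNDCLASSEX, 0, WindowProc, 0, 0, 0, 0, 0, COLOR_WINDOW+1, 0, class_name, 0

section '.code' code readable executable
start:
  ; patch the runtime-only fields, then register the class once
  invoke  GetModuleHandle, 0
  mov     [wc.hInstance], eax
  invoke  LoadIcon, 0, IDI_APPLICATION
  mov     [wc.hIcon], eax
  mov     [wc.hIconSm], eax
  invoke  LoadCursor, 0, IDC_ARROW
  mov     [wc.hCursor], eax
  invoke  RegisterClassEx, wc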

P.S. Actually, allowing to declare variables in arbitrary places is one of the worst C/C++ features since it encourages mixing code and data which, in turn, leads to maintenance problems. Having your data defined in a single place (within a project, unit (not applicable to C/C++), function, method), on the contrary, lets you see clearly when it’s time to refactor your code.
Roman



Joined: 21 Apr 2012
Posts: 1878
Roman 29 Jul 2018, 16:50
You can write your replies in Russian.
Roman



Joined: 21 Apr 2012
Posts: 1878
Roman 29 Jul 2018, 17:08
DimonSoft, interesting.
This gave me the idea that sometimes it is really more convenient not to declare the data and each variable separately (assigning them names), but to allocate a block of memory via the WinAPI (for example, 64 KB) and work with it as an array.

It is convenient because different projects allocate their own memory blocks via the WinAPI.
And when I combine them in one place, each one allocates its own memory regions and does not clobber the others' allocations.
This is simpler to implement, with fewer problems and less typing.
Furs



Joined: 04 Mar 2016
Posts: 2595
Furs 29 Jul 2018, 17:15
DimonSoft wrote:
P.S. Actually, allowing to declare variables in arbitrary places is one of the worst C/C++ features since it encourages mixing code and data which, in turn, leads to maintenance problems. Having your data defined in a single place (within a project, unit (not applicable to C/C++), function, method), on the contrary, lets you see clearly when it’s time to refactor your code.
What do you mean by this? Also C/C++ have units, the .c or .cpp files themselves (they're called translation units), but idk what you have in mind.
DimonSoft



Joined: 03 Mar 2010
Posts: 1228
Location: Belarus
DimonSoft 05 Sep 2018, 08:31
Furs wrote:
DimonSoft wrote:
P.S. Actually, allowing to declare variables in arbitrary places is one of the worst C/C++ features since it encourages mixing code and data which, in turn, leads to maintenance problems. Having your data defined in a single place (within a project, unit (not applicable to C/C++), function, method), on the contrary, lets you see clearly when it’s time to refactor your code.
What do you mean by this? Also C/C++ have units, the .c or .cpp files themselves (they're called translation units), but idk what you have in mind.

They’re not real units, and in fact one could say FASM has the same ones, since the semantics are basically the same. An include directive doesn’t magically make a language have units (note also that #include was not originally part of C itself, but part of the language of a separate tool called the preprocessor).

Units in C/C++ are basically emulated by means of separate tools. A unit generally consists of interface and implementation parts: the first one describes unit’s programming interface, i.e. the elements available to other units, the second one is the actual internals of the unit.

Consider a language where units are not emulated, say, Delphi. You can safely declare a variable both in interface and implementation sections and don’t have to worry about duplicate declarations and the declaration will stay the same no matter where and for what reason it is defined. Besides, you can have same identifiers exported from different modules and the language provides a straightforward means to distinguish between them, just by specifying fully-qualified name.

Compare this to C/C++ where you have your typical unit consist of two files (header and code) and you’d better not put variable declaration inside header file unless the declaration is marked with extern which basically means it is not a variable declaration but a reference to a variable that is declared somewhere else. Having same identifiers exported from multiple units means you’re stuck with renaming them since linker knows little to nothing about scopes. You can fake this with namespaces in C++ but that is basically a hack.
DimonSoft



Joined: 03 Mar 2010
Posts: 1228
Location: Belarus
DimonSoft 05 Sep 2018, 08:37
Roman wrote:
DimonSoft, interesting.
This gave me the idea that sometimes it is really more convenient not to declare the data and each variable separately (assigning them names), but to allocate a block of memory via the WinAPI (for example, 64 KB) and work with it as an array.

It is convenient because different projects allocate their own memory blocks via the WinAPI.
And when I combine them in one place, each one allocates its own memory regions and does not clobber the others' allocations.
This is simpler to implement, with fewer problems and less typing.

That smells like trouble. The data items can have different sizes, so it is no longer a real array. As the project is developed and maintained, the set of variables can change, which means the indices will change too. Using named constants instead of fixed indices would be reasonable here, but... wouldn't those be exactly the same labels you have in a data section?

I didn't catch whose "foreign" allocations could be a concern here. If you mean that several modules become parts of one project and each allocates its global data for itself in dynamic memory, that can make sense, but I suspect there will be problems with initialization order, with the possibility that at some step a memory allocation fails, and so on. Plus, an accidental buffer overrun in dynamic memory is, in my view, on average more critical than one in static memory.
Furs



Joined: 04 Mar 2016
Posts: 2595
Furs 05 Sep 2018, 14:23
DimonSoft wrote:
Compare this to C/C++ where you have your typical unit consist of two files (header and code) and you’d better not put variable declaration inside header file unless the declaration is marked with extern which basically means it is not a variable declaration but a reference to a variable that is declared somewhere else. Having same identifiers exported from multiple units means you’re stuck with renaming them since linker knows little to nothing about scopes. You can fake this with namespaces in C++ but that is basically a hack.
I think you're mixing up terms and concepts in respect to C/C++ (I'll just say C++ from now).

What you refer to as "variable declaration" is actually a definition. It defines the variable to live in that (translation) unit, so it should be placed in only one file (otherwise you have multiple definitions, which violates the one-definition rule).

There's an exception if you use the inline keyword for functions (and variables for C++17 and beyond): this tells the compiler that the function/variable is defined exactly the same in every unit, same tokens and all, so it's ok to have multiple definitions in different units, the linker will just pick one arbitrarily and discard the others.

The extern keyword is actually a variable declaration: it tells the unit that variable exists with that name and type. It doesn't define the variable though. Think of it doing something like:
Code:
mov eax, variable_label    
in code where it is used, but without the
Code:
variable_label db 5    
definition. Obviously, you need it in C++ since it has a type associated with it.

Functions are extern by default, so stuff like:
Code:
void foo();    
is a declaration. The definition would have the body block {} and so on.

Declarations are needed to tell one unit that such a name exists at all. It can't scan other translation units since they're kept separate so...


You can also have internal linkage, which you can imagine as the function/variable/type only being available in that unit (internal). static keyword or anonymous namespaces can be used for that purpose. You can think of it as getting a unique name for each unit. Always use internal linkage unless you want that function to be "visible" to other units.

This is why headers provide only declarations or inline definitions: since they get copy-pasted via #include, they don't really include any definitions (see exception with inline above). They just tell the compiler that "hey, this name exists somewhere else and is of this type, so when you encounter it, you know how to compile the code to reference it".

That's why headers provide the interface but not the implementation (definition).
DimonSoft



Joined: 03 Mar 2010
Posts: 1228
Location: Belarus
DimonSoft 05 Sep 2018, 20:36
English is not my native language and yes, mixing these two terms has been my problem for years. But I’m aware of the way a C/C++ project is divided into parts that pretend to be units.

What I’m talking about is that it is quite a bold generalization to use the word “unit” for a set of rules for distributing declarations and definitions throughout multiple files, plus a specific means of passing only some of them to the compiler and then passing the generated object files to the linker. The only way one could call that units, I think, is by never having seen anything better and more comfortable.

The preprocessor was initially a separate tool, and after it had done its job all we got was a set of files containing the “unit” contents plus all the declarations and inline definitions (thanks for bringing the right terms here) from other “units”. Then the compiler and linker would work one after another to remove the unnecessary junk caused by blindly including the whole header file multiple times (leading to the declarations being processed once for each inclusion). The fact that the preprocessor is usually integrated into the compiler these days doesn’t change the concept.

As I’ve already started comparing it to Delphi, I’ll go on this way. Delphi provides equivalents for all the #include, #ifdef and #define stuff, but no one would ever want to use them for splitting their project into units, because inclusion is not unit support. Having initialization and finalization sections for units with a well-defined order of execution; avoiding multiple passes over the same piece of code, leading to a significant increase in build speed; being able to share your unit in compiled form without any sources (say, for a shareware library of controls or algorithms); choosing between same-named identifiers from multiple units no matter how they’re written, and so on: this stuff (off the top of my head) can hardly be achieved with the rudimentary way C/C++ emulate units. All it costs is that the language is aware of the concept of a unit.
rugxulo



Joined: 09 Aug 2005
Posts: 2341
Location: Usono (aka, USA)
rugxulo 06 Sep 2018, 01:29
DimonSoft wrote:

What I’m talking about is that it is quite a bold generalization to use the word “unit” for a set of rules to distribute declrations and definitions throughout multiple files plus a specific means of passing only some of them to the compiler, then passing object files generated to the linker. The only way I see to do that is to never have seen anything better and more comfortable.


Yeah, C is a bit archaic, but even other contemporaries in the '70s didn't have good modularity yet (e.g. original Pascal).

DimonSoft wrote:

Preprocessor has initially been a separate tool, and after it had its job done all we got was a set of files containing “unit” contents plus all the declarations and inline definitions (thanks for bringing the right terms here) from other “units”.


I guess early C was inspired by Ratfor and Kernighan's Software Tools. So they used "macro", then m3, then m4, then wrote cpp. IIRC, Plan9's compiler has its own half-baked subset of cpp built-in, but if you need the full ANSI support, it uses dmr's old one (presumably modified a bit for better C99 handling).

DimonSoft wrote:

Then compiler and linker would work one after another to remove the unnecessary junk caused by blindly including the whole header file multiple times (leading to processing the declarations multiple times, once for each inclusion). The fact that preprocessor is usually integrated into compiler these days doesn’t change the concept.


It definitely complicates things. IIRC, the whole C++ standardization of modules is still unfinished, and two big implementations (Clang and MSVC) haven't agreed on what to do with macros. (But I don't really understand or use C++, just mildly curious.) I'd be surprised if that wasn't ironed out by C++20, which would be "a good thing".

DimonSoft wrote:

As I’ve already started comparing it to Delphi, I’ll go on this way. Delphi provides equivalents for all the #include, #ifdef and #define stuff but noone would ever want to use them for splitting their project into units.


Turbo Pascal introduced units in version four (1987 or such). So yes, before that, you had to use non-standard {$include}. Extended Pascal (ISO 10206) had modules circa 1988, but that was unpopular. Even Modula-2 had modules (obviously) since its inception circa 1979. Of course, Oberon combined definition and implementation into one file with '*' export marker, for better simplicity (and to avoid repeating text, thus always keeping two files in sync, etc).

DimonSoft wrote:

Because inclusion is not unit support. Having initialization and finalization sections for units with well-defined order of execution, avoiding multiple passes over the same piece of code leading to significant increase in building speed, being able to share your unit in compiled form without any sources (say, for a shareware library of controls or algorithms), choosing between same-named identifiers from multiple units no matter how they’re written, etc.—this stuff (off the top of my head) can hardly be achieved with the rudimentary way C/C++ emulate units. All that costs is that the language is aware of the concept of unit.


In theory, yes, all these Wirth-ian languages should build correctly without makefiles. And yes, when it is supported (usually but not always!), it's very convenient. For TP (and Delphi? which I don't use), you only need .TPU (or .DCU ?), but Free Pascal needs .PPU and .O (plus docs on how to use it) when redistributing. It's similar in other Wirth languages but the actual filenames and formats vary by compiler. There's no getting around that outside of portable bytecode, which most don't use (although nothing is stopping them from using that to generate truly native code that isn't slowly interpreted every time). I do think makefiles are a kludge and a pain, very unportable, but that can be mitigated with enough experience (but can't everything?).
rugxulo



Joined: 09 Aug 2005
Posts: 2341
Location: Usono (aka, USA)
rugxulo 06 Sep 2018, 01:39
rugxulo wrote:

Turbo Pascal introduced units in version four (1987 or such). So yes, before that, you had to use non-standard {$include}. Extended Pascal (ISO 10206) had modules circa 1988, but that was unpopular. Even Modula-2 had modules (obviously) since its inception circa 1979. Of course, Oberon combined definition and implementation into one file with '*' export marker, for better simplicity (and to avoid repeating text, thus always keeping two files in sync, etc).


Just to clarify, I believe combining interface and implementation into one file is a good idea (e.g. TP or EP). But having them separate (e.g. Modula-2 or Modula-3) was, IIRC, in case you wanted to expose a different interface from the same implementation. But isn't that fairly rare? The idea was to strongly hide irrelevant information, let people only work on parts that belong to them, not to change/break interfaces later. But I do think separate interface/implementation files can overcomplicate everything when you start getting into dozens of modules. Of course, you need an automated tool (browser?) to only reveal the public interface in such a combined text file (e.g. Oberon), but that's only a minor inconvenience.

So there are some people who prefer it one way or another, even if slightly more tedious in minor regards.

Honestly, you can overcomplicate anything if you're not careful. Too many separate files/units/modules is "a bad thing", IMHO. But sometimes it can't be helped. I do also think that rebuilding anything should be as simple as possible. (Even many Wirth-ian language implementations are far from immune to that, sadly.) Convoluted makefiles (esp. if too *nix-oriented) are annoying. I'd rather have a slow shell script, even if it rebuilds from scratch every time, than a brittle makefile that doesn't even work! (BTW, I'm not aware of a lot of POSIX makefiles; most people seem to prefer GNU syntax. It's nicer to not rely on vendor-specific tools, but that's rarely easy.)

I'd still recommend that Furs look into learning FPC/Delphi eventually since it has a lot going for it. I think DimonSoft would agree with me. Cool
DimonSoft



Joined: 03 Mar 2010
Posts: 1228
Location: Belarus
DimonSoft 06 Sep 2018, 23:27
rugxulo wrote:
I think DimonSoft would agree with me. Cool

I find it surprising how many ideas and implementations popular these days are in fact very old and had already been rejected back in the past while a lot of nice things that seem to be obvious are underrated and left without proper attention.

The modern Pascal/Delphi ecosystem is only one of the examples, but it is probably the most noticeable one. Having been developed in competition with the C/C++ branch while being among the most popular languages, Pascal/Delphi accumulated a lot of useful stuff. At the very least, Delphi has been known to support brand-new features long before it was obvious whether they would last, and introducing such support requires a lot of flexibility from the language and tools, which is a good thing to learn from.


Copyright © 1999-2025, Tomasz Grysztar. Also on GitHub, YouTube.
