Thanks for taking the time with your replies.
First, you mentioned ‘Rubyist’, a term I used in my profile. I should probably change that, but regardless, I’m a coder.
Been coding since the seventies as a teenager, started in CompSci, my ‘Gang of Four Design Patterns’ book is a very early printing, and I’ve written a lot of ‘MSFT’ code, both before and after .NET. I could still be an idiot, but I think most people place me somewhere above that.
Now, to the multiple-files/one-file issue: if s is the average size of the files and n is their number, then as the ratio n/s rises, any measurement becomes more about timing the OS’s opening of files than about Ruby’s parsing of them. As for what values of n/s pertain to typical plugins, I don’t know.
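(If I were going to measure it, something like the rough, untested sketch below is what I’d reach for; the paths are hypothetical and assume the same total source split two ways.)

```ruby
require 'benchmark'

# Hypothetical: the same total amount of Ruby source, split two ways.
small_files  = Dir.glob("split/**/*.rb")      # n small files
one_big_file = "combined/all_in_one.rb"       # the same code concatenated

t_many = Benchmark.realtime { small_files.each { |f| load f } }
t_one  = Benchmark.realtime { load one_big_file }

puts format("n small files: %.4f s   one big file: %.4f s", t_many, t_one)
# As n/s grows (lots of tiny files), t_many is increasingly dominated by
# the per-file open/stat cost rather than Ruby's parse time.
```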
.NET languages are essentially strongly typed, compiled languages, while Ruby is a dynamically typed (‘untyped’) interpreted language. Ruby is also a ‘console language’, and using a ‘single instance’ of it in a UI-based application like SU causes issues not seen when it’s a console app.
So, with SU’s embedded Ruby, one might look at Rubygems and Rails, as both are applications that often run for a long time. Both make use of an ‘autoload’ concept. Multiple files, load on demand.
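As a minimal sketch of that idea (the file and constant names here are made up, not from any real extension):

```ruby
module MyExtension
  # autoload only registers the mapping; nothing is read or parsed yet.
  autoload :Exporter,     "my_extension/exporter"
  autoload :Importer,     "my_extension/importer"
  autoload :ReportDialog, "my_extension/report_dialog"
end

# Only when the user first triggers the export command does Ruby actually
# read and parse exporter.rb:
#   MyExtension::Exporter.new.run
```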
I think some of the motivation for your ‘one file’ system was based on using gems/std-libs. One issue with them is that they may load a lot more files than are needed for the particular task you use them for. That makes CI testing easier, as loading the main file also loads everything else. And since many are open source…
So, since one of the articles you cited was about .NET with C# examples, maybe we should consider why .NET has so many assemblies/files instead of just one. Do you load all of them in your .NET/C# code? Why don’t C programs load every Windows DLL, instead of only the ones they use?
> I believe 1 core thing Ruby does wrong: it makes 0 distinction between source code and runtime code.
I’m having a hard time with that statement. Feel free to cite any articles/blogs, etc. that share that opinion. In what way does the phrase ‘distinction between source code and runtime code’ apply to C#? How does it apply to an interpreted language like Ruby?
Normally, C# compiles several source files into a few runtime files. Ruby does the same in its VM. Where’s the difference?
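In CRuby you can even look at what the VM compiles a source snippet into, for example:

```ruby
# Compile a snippet and print the YARV bytecode the VM actually runs.
iseq = RubyVM::InstructionSequence.compile("def add(a, b); a + b; end")
puts iseq.disasm
```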
> Poor code? Use modules for everything
>
> All my extensions up till around 18 months ago used modules quite a lot for what would be singletons, such as data repositories, because that made them globally accessible. That was slow on boot, because the entire repo would be created and stored in memory even if the user might never use the app.
You seem to be conflating what code runs when it’s loaded by Ruby with whether that code is contained in a class or a module. Modules do not need to run any code when loaded. Most Ruby apps that use classes for singleton objects could be re-written with modules and perform exactly the same way. It’s just the norm to use classes…
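A quick sketch of what I mean, with made-up names; the module version does no more work at load time than the class version, and both can build their data lazily:

```ruby
require 'singleton'

# The common class-based idiom:
class RepoA
  include Singleton
  def records
    @records ||= []        # built lazily, on first use
  end
end

# A module doing the same job; loading this file only defines methods,
# it doesn't create or populate anything:
module RepoB
  def self.records
    @records ||= []        # same lazy initialization, same behaviour
  end
end

RepoA.instance.records << :a
RepoB.records          << :b
```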
Your posts have bridged several topics. Summing up my thoughts:
- I believe code should be loaded in an ‘on demand’ fashion (see the sketch after this list).
- Using some gems and/or std-libs may load a lot more code than is required. How best to address that is messy.
- Combining multiple files into one may be helpful, but may also load a lot of code that is rarely used by the SU user.
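To put the first point in SketchUp terms, here’s a rough sketch (the extension and dialog names are hypothetical) of deferring the require until the user actually asks for the feature:

```ruby
require "sketchup.rb"   # for the file_loaded helpers

unless file_loaded?(__FILE__)
  UI.menu("Plugins").add_item("My Report…") do
    # The dialog code isn't even read from disk until the first click.
    Sketchup.require "my_extension/report_dialog"
    MyExtension::ReportDialog.show
  end
  file_loaded(__FILE__)
end
```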
One topic I haven’t seen addressed is what the object lifetime should be. Even with autoload/load-on-demand, one still has the issue of object lifetime.
So a plugin loads a file that displays/controls/interacts with a user dialog box. After the user closes the dialog, should the dialog instance be destroyed?
If a user has several plugins installed, and opens several dialogs in the course of a day’s work, does that affect SU? Same issue with other objects, whether they’re part of the SU API or native to Ruby. I’m guessing not, but I don’t know…
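To make the question concrete, here’s roughly what I mean (a UI::HtmlDialog sketch with made-up names); whether one should drop the reference like this, and whether it matters to SU either way, is exactly what I’m asking:

```ruby
module MyExtension
  def self.show_report
    @dialog = UI::HtmlDialog.new(dialog_title: "Report", width: 400, height: 300)
    @dialog.set_html("<p>report goes here…</p>")
    # If we drop our reference when the user closes it, the Ruby object
    # becomes eligible for GC; if we keep @dialog, it lives for the session.
    @dialog.set_on_closed { @dialog = nil }
    @dialog.show
  end
end
```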