If checking whether a value is included in an Array is what takes so much time, perhaps Sketchup.require could use a Set?
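For illustration, here is a rough benchmark of the lookup-cost difference (the path list is invented; Sketchup.require's internal bookkeeping is not public):

```ruby
require 'set'
require 'benchmark'

# Hypothetical list of already-loaded feature paths (names are made up).
loaded_array = (1..5_000).map { |i| "C:/Plugins/extension_#{i}.rbe" }
loaded_set   = loaded_array.to_set

probe = "C:/Plugins/extension_4999.rbe"

Benchmark.bm(8) do |x|
  # Array#include? scans linearly: O(n) per lookup.
  x.report("Array:") { 1_000.times { loaded_array.include?(probe) } }
  # Set#include? hashes the key: O(1) per lookup.
  x.report("Set:")   { 1_000.times { loaded_set.include?(probe) } }
end
```

The gap only matters when the loaded-features list is long and is probed often; for a handful of entries the two are indistinguishable.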
To make things more interesting,
Kernel.require is replaced by the require in the Ruby gem system
C:/Program Files/SketchUp/SketchUp 2017/Tools/RubyStdLib/rubygems/core_ext/kernel_require.rb circa line 38
This new require calls into the Gem system:
Gem.find_unresolved_default_spec() in C:/Program Files/SketchUp/SketchUp 2017/Tools/RubyStdLib/rubygems.rb circa line 1118
which adds another layer of overhead
Drop this puts statement at line 39 or so in the kernel_require.rb file and you can follow all of the require calls.
puts 'require ' + path
Set over Array should not make much of a difference. (Basically the arrays used are being treated as a set.)
As Steve and Greg point out, there is other overhead contributing. (The disk I/O time may vary depending upon the speed of HDDs or SSDs.)
Yes, but you could rewrite the kernel_require.rb require method to return immediately for duplicate loads of sketchup.rb, extensions.rb, etc. Just to see if this speeds things up or not.
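A minimal sketch of that idea, assuming a memoization layer prepended ahead of the stock require (this is not SketchUp's or RubyGems' actual code; RequireMemo and its cache are invented names):

```ruby
# Sketch: memoize require results so repeat calls for the same path
# return immediately, skipping the file-existence search entirely.
module RequireMemo
  CACHE = {}

  def require(path)
    # Short-circuit for a path we have already seen.
    return CACHE[path] if CACHE.key?(path)
    CACHE[path] = super
  end
end

Object.prepend(RequireMemo)

require 'set'   # first call: hits the real require, result is cached
require 'set'   # second call: answered from the cache
```

This would only be a fair test, of course, if the cache key matched how SketchUp normalizes paths; relative vs. absolute paths would otherwise defeat it.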
Sorry I wasn't more clear. Most of the time in a require is due to checking for file existence. Almost all 'code' operations will take much less time than checking for a file's existence.
Somewhat implicit in this thread is the idea that multiple requires for the same file exist.
For instance, plugin A loads z.rb from the std-lib, and then plugin B is loaded, and it also requires z.rb. I really don't think the 2nd require is taking much time, at least in relation to the time it takes to load and parse z.rb. *.so files take even less time.
Or, a plugin has multiple files that require the same file that is part of the plugin. Since this is probably encrypted, I have no idea how SU modifies Ruby's require code.
As I've mentioned elsewhere, SU startup time is based on how many Ruby files (or Ruby code lines) it needs to load (and possibly decrypt) and parse. If everyone is loading every Ruby file in their extension, it will take longer, and will also use more memory.
So, adding a guard/conditional to require statements may improve load time, but I'm not sure if the difference would be significant.
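The guard pattern might look like this (the library names are just stand-ins; any constant the file defines would work as the sentinel):

```ruby
# A guarded require skips the call when the library's top-level constant
# is already defined, avoiding the file-existence search entirely.
require 'json' unless defined?(JSON)

# The same idea using the loaded-feature list directly
# (the string match here is deliberately simplistic):
require 'date' unless $LOADED_FEATURES.any? { |f| f.end_with?('/date.rb') }

puts "JSON available: #{defined?(JSON) ? 'yes' : 'no'}"
```

The `defined?` form is the cheaper of the two, since it never touches the (potentially long) `$LOADED_FEATURES` array.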
Perhaps extension version checks by the Extension Manager could also slow the load time.
If this is not already the case, such checks could be limited to happen the first time you start SketchUp each day or something. I may start SketchUp a total of 10 times on a normal day for working on different files.
Good point, especially if one has a slow connection. Do you know if it's one call or a call for each extension?
Haven't thought about how to set it up, but I wonder if the JIT feature in Ruby 2.6+ will affect how fast plugins run in SU. For that matter, it might allow plugins to be pre-compiled, but obviously there would need to be extensions for both macOS & Windows…
It is per extension. (I.e., one extension's version has no bearing upon the version of another. But this only applies to extensions installed via Trimble's Extension Warehouse.)
And it's not only a version check; depending upon the user's Extension Loading Policy, it could also involve hash checking of all extension files.
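SketchUp's actual hash scheme is not public, but generically a per-file integrity check might look like this sketch using Ruby's Digest (the function name and manifest format are assumptions, not SketchUp's implementation):

```ruby
require 'digest'

# Sketch: build a manifest of SHA-256 digests for a list of files,
# as a loading policy might do before trusting an extension.
def hash_extension_files(paths)
  paths.each_with_object({}) do |path, manifest|
    manifest[path] = Digest::SHA256.file(path).hexdigest
  end
end

# Demo against this script itself, since it is a file we know exists:
manifest = hash_extension_files([__FILE__])
manifest.each { |path, digest| puts "#{digest[0, 12]}  #{path}" }
```

Hashing is CPU-cheap relative to the disk reads it forces, so for a large extension the extra I/O is the part that would show up in load time.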
Additional time is also spent decrypting rbe and rbs files.
My main idea is that any little bit of reclaimed speed will help.
I wasn't sure if they (for instance) sent a JSON string with all the extension name/version info or did it one at a time. Multiple requests could certainly slow things down.
In some respects, Extension Warehouse is similar to other package management systems (RubyGems, MSYS2 pacman, etc). Most of those do not use multiple requests.
Bundler/RubyGems actually download the data (in the user folder, see .bundle/cache and .gem/spec), but Bundler/RubyGems differ in that there are multiple versions available, whereas with something like pacman, there is only one package available…
I would assume only one request is sent to EW asking for the newest versions of the installed extensions. Everything else would be a huge waste of time.
Oh, I understand now. I don't know, as that is an implementation detail that is not public knowledge.
Wireshark. I've had it installed on my last two systems, but not the one I'm currently working on. It's been a while, and https can make things difficult.
I'm not really that interested in knowing … and I'd not feel 'comfy' disclosing the results. (The EULA has some clauses that prohibit reverse engineering.)
I use Fiddler (Fiddler | Web Debugging Proxy and Troubleshooting Solutions) to check what requests are being made from my machine.
I do not consider logging what your machine is doing 'reverse engineering'.
Now, to be on topic. A few months ago I had a conversation with @MSP_Greg about more or less the same topic. (On my phone right now, so it's difficult to link the topic.) From my experience, requiring the same file again and again is not really so much of an issue, nor is the raw amount of ruby code per file. The real performance killer is the first time a unique file gets loaded.
For that reason I combine all my separate sources into 1 file upon release. I have a build script that automates that. This solves a few things:
Only 1 require has to be done
No problems with referencing encrypted files (and thus having to use Sketchup.require), since there are no other references.
If you want some proof: Send me a dummy extension consisting of multiple files and I will combine them so you can do a comparison.
This is done async in the background.
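A minimal sketch of such a combine step (the directory layout, file names, and constants are invented for the demo; a real build script would also handle load order and encryption):

```ruby
require 'tmpdir'
require 'fileutils'

# Sketch: concatenate an extension's separate source files into one
# loader file, so only one require (and one parse pass) happens at startup.
Dir.mktmpdir do |root|
  src_dir = File.join(root, 'src')
  FileUtils.mkdir_p(src_dir)

  # Stand-in source files for the demo:
  File.write(File.join(src_dir, 'a_core.rb'),  "A_CORE = 1\n")
  File.write(File.join(src_dir, 'b_tools.rb'), "B_TOOLS = 2\n")

  sources = Dir.glob(File.join(src_dir, '*.rb')).sort
  output  = File.join(root, 'combined.rb')

  File.open(output, 'w') do |out|
    sources.each do |src|
      out.puts "# ---- #{File.basename(src)} ----"  # marker comment per file
      out.write File.read(src)
    end
  end

  require output  # a single require loads everything
  puts "Combined #{sources.size} files into one loader"
end
```

Sorting the glob gives a deterministic concatenation order; in a real extension you would order files by dependency instead.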
Was that using Sketchup.require or the native require?
Native
Agreed. Essentially said the same thing above.
nor is the raw amount of ruby code per file.
The real performance killer is the first time a unique file gets loaded
File size and 'code lines' determine how long it takes for Ruby to parse the file. Of course, this only occurs the 'first time'.
Reading one file will always take a shorter amount of time than reading multiple files totaling the same size.
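A rough illustration of that point, reading the same total bytes as 100 small files versus one large file (sizes and counts are arbitrary):

```ruby
require 'benchmark'
require 'tmpdir'
require 'fileutils'

dir   = Dir.mktmpdir
chunk = 'x' * 10_000  # 10 KB per small file

# Write 100 small files and one file of the same total size (1 MB).
100.times { |i| File.write(File.join(dir, "part_#{i}.rb"), chunk) }
File.write(File.join(dir, 'whole.rb'), chunk * 100)

Benchmark.bm(10) do |x|
  # Each read pays open/stat/close overhead, 100 times over.
  x.report('100 files:') { 100.times { |i| File.read(File.join(dir, "part_#{i}.rb")) } }
  # One read pays that overhead once.
  x.report('1 file:')    { File.read(File.join(dir, 'whole.rb')) }
end

FileUtils.remove_entry(dir)
```

The per-file open/stat overhead, not raw throughput, is what makes the many-files case slower; on a warm OS cache the gap shrinks but does not vanish.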
Generally, I'm bothered by the idea that someone who has a large number of plugins installed may have a lot of Ruby code loaded/parsed that they only intermittently use.
If there wasn't a performance hit from this, Ruby itself wouldn't have all the separate rb & so files…