Posted by matijs
31/07/2017 at 13h50
- Writing a contract such that the law is powerless to reverse it is
anti-democratic. Libertarians will probably love it, but in canceling out the
‘oppressive’ state it also cancels any protections offered by the state.
- Trust is a fundamental basis of human interaction. Creating a trustless way
of cooperating allows agents to not be held accountable for actions performed
outside the contract.
- Instead of the lame excuse ‘the law allows me to be an asshole’, we’ll get
‘the smart contract allows me to be an asshole’.
Posted by matijs
10/04/2016 at 09h21
This is an anti-pattern that has bitten me several times.
Suppose you have an object hierarchy, with a superclass Animal, and several
subclasses, Worm, Snake, Dog, Centipede. The superclass defines the abstract
concept move, which is realized in the subclasses in different ways, i.e., by
slithering or walking. Suppose that due to other considerations, it makes no
sense to derive Worm and Snake from a SlitheringAnimal, nor Dog and Centipede
from a WalkingAnimal. Yet, the implementations of Worm#move and Snake#move
have a lot in common, as do Dog#move and Centipede#move.
One way to solve this is to provide methods walk and slither in the
superclass that can be used by the subclasses that need them. Because it makes
no sense for all animals to be able to walk and slither, these methods would
need to be accessible only to subclasses (e.g., private in Ruby).
Thus, the superclass provides a toolbox of methods that can only be used by its
subclasses to mix and match as they see fit: a Private Toolbox.
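In code, the anti-pattern looks roughly like this (a sketch using the classes from the example above; the method bodies are placeholders):

class Animal
  def move
    raise NotImplementedError
  end

  private

  # The private "toolbox": helpers that only some subclasses will ever use.
  def walk
    # leg-based locomotion would go here
  end

  def slither
    # wriggling locomotion would go here
  end
end

class Dog < Animal
  def move
    walk # private methods are callable from subclasses in Ruby
  end
end

class Worm < Animal
  def move
    slither
  end
end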
This may seem an attractive course of action, but in my experience, this
becomes a terrible mess in practice.
Let’s examine what is wrong with this in more detail. I see four concrete
problems:
- It is not always clear at the point of method definition what a method’s
purpose is.
- Each subclass carries with it the baggage of extra private methods that
neither it nor its subclasses actually use.
- The superclass’s interface is effectively extended to include its non-public
methods, so its implementor is no longer free to change them.
- New subclasses may need to share methods that are not available in the
superclass.
The Animal superclass shouldn’t be responsible for the ability to slither and
to walk. If we need more modes of movement, we may not always be able to add
them to the superclass.
We could extract the modes of movement into separate helper classes, but in
Ruby, it is more natural to create a module. Thus, there would be modules
Walker and Slitherer, each included by the relevant subclasses of Animal. These
modules could either define move directly, or define walk and slither.
Because the methods added in the latter case would actually make sense for the
including classes, there is less need to make them private: One could make an
instance of Dog walk either by calling move, or by calling walk directly.
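Here is a sketch of that second variant, where the modules provide walk and slither and each subclass composes its own move (bodies are placeholders again):

module Walker
  def walk
    # leg-based locomotion would go here
  end
end

module Slitherer
  def slither
    # wriggling locomotion would go here
  end
end

class Dog < Animal
  include Walker

  def move
    walk
  end
end

class Worm < Animal
  include Slitherer

  def move
    slither
  end
end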
This solves all four of the Private Toolbox’s problems:
- The module names reveal the purpose of the defined methods.
- Subclasses that do not need a particular module’s methods do not include it.
- The implementor of Animal is free to change its private methods.
- If a new mode of transportation is needed, no changes to Animal are needed.
Instead, a new module can be created that provides the relevant functionality.
Tags
patterns, programming, ruby
Posted by matijs
02/04/2016 at 16h55
I always like extra developer tooling to be minimally intrusive, to avoid
forcing it on others working with the same code. There are several aspects to this:
Presence of extra gems in the bundle, presence and visibility of extra files in the
repository, and presence of extra code in the project.
For this reason, I’ve been reluctant to introduce tools like guard or some of
the Rails preloaders that came before Spring. On the other hand, no-one would
be bothered by my occasional running of RuboCop, Reek or pronto.
In this light, I’ve always found SimpleCov a little too intrusive: It needs to
be part of the bundle, and the normal way to set things up makes it rather
prominently visible in your test or spec helper. Nothing too terrible, but I’d
like to just come to a project, run something like simplecov rake spec, and
have my coverage data.
I haven’t reached that blissful state of casual SimpleCov use yet, but I’m
quite pleased with what we achieved for Reek.
Here’s what we did:
- Add simplecov to the Gemfile
- Add a .simplecov file with configuration:
SimpleCov.start do
  track_files 'lib/**/*.rb'
  add_filter 'lib/reek/version.rb'
end
SimpleCov.at_exit do
  SimpleCov.result.format!
  SimpleCov.minimum_coverage 98.9
  SimpleCov.minimum_coverage_by_file 81.4
end
- Add -rsimplecov to the ruby_opts for our spec task:
RSpec::Core::RakeTask.new('spec') do |t|
  t.pattern = 'spec/reek/**/*_spec.rb'
  t.ruby_opts = ['-rsimplecov -Ilib -w']
end
This has several nice features:
First, there are no changes to spec_helper.rb. That file can get pretty
cluttered, so the less that has to be in there, the better.
Second, it only calculates coverage when running the full suite with rake
spec. This means running just one spec file while developing won’t clobber
your coverage data, and it makes running single specs a little faster since it
doesn’t need to update the coverage reports.
Third, it enforces a minimum coverage per file and for the whole suite. The
second point helps a lot in making this practical: Otherwise, running
individual specs would almost always fail due to low coverage.
Posted by matijs
25/09/2015 at 08h11
I just realized that one important factor for attracting casual open source contributions is code/repo size. A huge repo is a barrier. So, it’s hugely important to either use off-the-shelf libraries, or split off parts of your code into their own components. These components need to live in their own repository, so no monorepos.
Of course, a high-status, high-visibility project can get away with more. Rails, for example, has all its components in one repository and does not seem to be lacking in contributions. On the other hand, for a long time Gnome required the full source for everything to be checked out and built together. That required a serious commitment for even the most trivial bug fixes.
Why the sudden insight? A project I’m involved in has problems with wkhtmltopdf: The version that used to work crashes after a server upgrade, and the version that works has problems with fonts and images. A simple solution could be to just recompile the old version on the new server. However, because it essentially forks all of Qt, checking out the source will require 1GB of disk space, while building will require another 2.5GB (and a commensurate amount of time). This is not undertaken lightly.
Posted by matijs
28/07/2015 at 10h52
Because of a pull request I was working on, I had cause to benchmark ActiveSupport’s #try. Here’s the code:
require 'benchmark'
require 'active_support/core_ext/object/try'

class Bar
  def foo
  end
end

class Foo
end

bar = Bar.new
foo = Foo.new
n = 1000000

Benchmark.bmbm(15) do |x|
  x.report('straight')      { n.times { bar.foo } }
  x.report('try - success') { n.times { bar.try(:foo) } }
  x.report('try - failure') { n.times { foo.try(:foo) } }
  x.report('try on nil')    { n.times { nil.try(:foo) } }
end
Here is a sample run:
Rehearsal ---------------------------------------------------
straight 0.150000 0.000000 0.150000 ( 0.147271)
try - success 0.760000 0.000000 0.760000 ( 0.762529)
try - failure 0.410000 0.000000 0.410000 ( 0.413914)
try on nil 0.210000 0.000000 0.210000 ( 0.207706)
------------------------------------------ total: 1.530000sec
user system total real
straight 0.140000 0.000000 0.140000 ( 0.143235)
try - success 0.740000 0.000000 0.740000 ( 0.742058)
try - failure 0.380000 0.000000 0.380000 ( 0.379819)
try on nil 0.210000 0.000000 0.210000 ( 0.207489)
Obviously, calling the method directly is much faster. I often see #try used defensively, without any reason warranted by the logic of the application. This makes the code harder to follow, and this benchmark shows that this kind of cargo-culting can actually harm the performance of the application in the long run.
Some more odd things stand out:
- Successful #try is slower than a failed try plus a straight call. This is because #try actually does some checks and then calls #try!, which does one of the checks all over again (see the sketch after this list).
- Calling #try on nil is slower than calling a nearly identical empty method on foo. I don’t really have an explanation for this, but it may have something to do with the fact that nil is a special built-in class that may have different logic for method lookup.
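For illustration, here is a rough sketch of the shape of that checking. This is my own simplification with made-up names, not the actual ActiveSupport source, and it leaves out the duplicated check mentioned above:

class Object
  def try_sketch(method_name, *args, &block)
    # Check before dispatching; a plain method call skips this step entirely.
    try_sketch!(method_name, *args, &block) if respond_to?(method_name)
  end

  def try_sketch!(method_name, *args, &block)
    public_send(method_name, *args, &block)
  end
end

class NilClass
  def try_sketch(*)
    # Swallow the call instead of raising NoMethodError.
    nil
  end
end

Even in this stripped-down form, every call pays for an extra method dispatch and a respond_to? check, which is roughly where the difference in the benchmark comes from.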
Bottom line: #try is pretty slow because it needs to do a lot of checking before actually calling the tried method. Try to avoid it if possible.
Tags
benchmark, programming, ruby
Posted by matijs
30/01/2014 at 06h16
These past few days, I’ve been busy updating RipperRubyParser to make it compatible with RubyParser 3. This morning, I discovered that one thing that was changed from RubyParser 2 is the parsing of negations.
Before, !foo was parsed like this:
s(:not, s(:call, nil, :foo))
Now, !foo is parsed like this:
s(:call, s(:call, nil, :foo), :!)
That looks a lot like a method call. Could it be that in fact, it is a method call? Let’s see.
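One quick way to check, assuming Ruby 1.9 or later, is to override ! on a class of our own and see whether negation ends up calling it:

class Foo
  def !
    puts 'Foo#! called'
    super
  end
end

!Foo.new
# prints "Foo#! called", so negation really is dispatched as a method call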
Tags
ruby, software
Posted by matijs
19/01/2014 at 11h33
- Things needed every day
- Things needed every week
- Things needed only during a certain season
- Things needed for administrative purposes
- Things kept for sentimental reasons
- Things kept for beauty
Tags
life
Posted by matijs
02/03/2013 at 16h42
Yesterday, I read Alex Gaynor’s slides on dynamic language
speed.
It’s an interesting argument, but I’m not totally convinced.
At a high level, the argument seems to be as follows:
- For a comparable algorithm, Ruby et al. do much more work behind the
scenes than ‘fast’ languages such as C.
- In particular, they do a lot of memory allocation (illustrated below).
- Therefore, we should add tools to those languages that allow us to do
memory allocation more efficiently.
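As a small illustration of the allocation point (my own sketch, not something from the slides), you can watch objects being allocated behind the scenes with ObjectSpace:

# Rough count of how many objects a block allocates; numbers are approximate.
def allocations
  GC.disable
  before = ObjectSpace.count_objects
  used_before = before[:TOTAL] - before[:FREE]
  yield
  after = ObjectSpace.count_objects
  GC.enable
  (after[:TOTAL] - after[:FREE]) - used_before
end

puts allocations { 10_000.times { 'foo' + 'bar' } } # string literals and + allocate new Strings
puts allocations { 10_000.times { 2 + 3 } }         # small integer arithmetic allocates nothing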
Tags
benchmarks, ruby, software, speed
Posted by matijs
17/02/2013 at 20h15
I love Travis CI. I love git bisect. I used both recently to track down
a bug in GirFFI.
Suddenly, builds were failing on JRuby. The problem did not occur on my
own 64-bit machine, so it seemed hard to debug. I tried making Travis
use different JVMs, but that didn’t help, apart from crashing in a
different way (faster, too, which was nice).
Building a Travis box
Using the travis-boxes repository, I created a VM as used by Travis.
This is currently not documented well in the READMEs, so I’m writing it
down here, slightly out of order of actual events.
I cloned the following three repositories:
travis-cookbooks
travis-boxes
veewee-definitions
First, I created a base box in veewee-definitions, according to its
README. In this case, I created a precise32 box, since that’s the box
Travis uses for the builds. The final export stage creates a
precise32.box file.
Then, I moved the precise32.box file to travis-boxes/boxes, making a
base box available there. There is a Thor task to create just such a
base box right there, but it doesn’t work, and seems to be deprecated
anyway, since veewee is no longer supposed to be used in that
repository.
So, with a base box available in travis-boxes, I used the following to
create a fully functional box for testing Rubies:
bundle exec thor travis:box:build -b precise32 ruby
Oddly, this didn’t produce a box travis-ruby, but it did produce
travis-development, which I could then manipulate using vagrant.
Hunting down the bug
I ssh’d into my fresh travis box using vagrant ssh. After a couple of
minutes getting to know rvm (I use rbenv myself), I was able to confirm
the crash on JRuby. After some initial poking around trying to pin down
the problem to one particular test case and failing, I decided to use
git bisect. As my check I used the test:introspection task, which
reliably crashed when the problem was present.
While it’s possible to automate git bisect, I like to use it manually,
since the particular test used may fail for unrelated reasons. Also, since
git bisect is a really fast process, there is a pleasant lack of tedium.
Anyway, after a couple of iterations, I was able to locate
the problematic commit.
By checking the different bits of the commit I then found the culprit: I
accidentally broke the code that creates layout definitions, in
particular the one used by GValue. Going back to master, I added
a simple test and fix.
I will have to revisit the code later to clean it up and make it more
robust.
Tags
GirFFI, git, github, software, travis
Posted by matijs
04/11/2012 at 13h34
Once upon a time, there was only UnifiedRuby, a cleaned-up
representation of the Ruby AST.
Now, what do we have?
- RubyParser before version 3; this is the UnifiedRuby format:
RubyParser.new.parse "foobar(1, 2, 3)"
# => s(:call, nil, :foobar, s(:arglist, s(:lit, 1), s(:lit, 2), s(:lit, 3)))
- RubyParser version 3:
Ruby18Parser.new.parse "foobar(1, 2, 3)"
# => s(:call, nil, :foobar, s(:lit, 1), s(:lit, 2), s(:lit, 3))
Ruby19Parser.new.parse "foobar(1, 2, 3)"
# => s(:call, nil, :foobar, s(:lit, 1), s(:lit, 2), s(:lit, 3))
- Rubinius; this is basically the UnifiedRuby format, but using Arrays:
"foobar(1,2,3)".to_sexp
# => [:call, nil, :foobar, [:arglist, [:lit, 1], [:lit, 2], [:lit, 3]]]
- RipperRubyParser; a wrapper around Ripper producing UnifiedRuby:
RipperRubyParser::Parser.new.parse "foobar(1,2,3)"
# => s(:call, nil, :foobar, s(:arglist, s(:lit, 1), s(:lit, 2), s(:lit, 3)))
How do these fare with new Ruby 1.9 syntax? Let’s try hashes. RubyParser
before version 3 and Rubinius (even in 1.9 mode) can’t handle this.
- RubyParser 3:
Ruby19Parser.new.parse "{a: 1}"
# => s(:hash, s(:lit, :a), s(:lit, 1))
- RipperRubyParser:
RipperRubyParser::Parser.new.parse "{a: 1}"
# => s(:hash, s(:lit, :a), s(:lit, 1))
And what about stabby lambdas?
- RubyParser 3:
Ruby19Parser.new.parse "->{}"
# => s(:iter, s(:call, nil, :lambda), 0, nil)
- RipperRubyParser:
RipperRubyParser::Parser.new.parse "->{}"
# => s(:iter, s(:call, nil, :lambda, s(:arglist)),
# s(:masgn, s(:array)), s(:void_stmt))
That looks like a big difference, but this is just the degenerate case.
When the lambda has some arguments and a body, the difference is minor:
- RubyParser 3:
Ruby19Parser.new.parse "->(a){foo}"
# => s(:iter, s(:call, nil, :lambda),
# s(:lasgn, :a), s(:call, nil, :foo))
- RipperRubyParser:
RipperRubyParser::Parser.new.parse "->(a){foo}"
# => s(:iter, s(:call, nil, :lambda, s(:arglist)),
# s(:lasgn, :a), s(:call, nil, :foo, s(:arglist)))
So, what’s the conclusion? For parsing Ruby 1.9 syntax, there are really
only two options: RubyParser and RipperRubyParser. The latter stays
closer to the UnifiedRuby format, but the difference is small.
RubyParser’s results are a little neater, so RipperRubyParser should
probably conform to the same format. Reek can then be updated to use the
cleaner format, and use either library for parsing.
Tags
ripper, ruby, sexp, software