How solid are SOLID coding principles?
I am no programmer, but software architects do attend my enterprise and
solution architecture classes.
And we do dip into software architecture now and then, for example discussing issues with microservices.
Occasionally, software developers refer to the SOLID principles.
These principles, promoted by Robert Martin in the 1990s, are still
taught in university courses.
They address the scope and definition of the fine-grained modules called classes in OO programs, and changes to classes arranged in an inheritance tree.
The claim is that when applied “properly”
they make code more extendable, logical, and easier to read.
I don’t include them in the courses I run, partly because they apply at a lower level of software architecture than most of what my classes address, and partly because I don’t find them entirely convincing.
In system design, principles are not rules.
Understanding and applying principles is not always straightforward, and
can have downsides.
As one commentator said:
“You really have to take each
project as a separate project and trying to cram an ideal into every project is
going to hamper productivity and creativity in the end.”
So, below, let me ask some questions that strike me about the SOLID principles, and quote some comments that seem convincing.
In short, the principles can be useful, but they can be counter-productive if you don't minimize the complexities they may lead to.
I’ve added a brutal conclusion at the end.
COMMENTS AND QUESTIONS WRT THIS SOURCE
https://betterprogramming.pub/solid-principles-simple-and-easy-explanation-f57d86c47a7f
Single Responsibility Principle
“A class should have one, and only one, reason to change.
One class should serve only one purpose.
All its methods and properties should work towards the same goal.
When a class serves multiple purposes or responsibilities, it should be made into a new class.”
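To make the idea concrete, here is a minimal Java sketch of the kind of split the principle prescribes. The Invoice, InvoicePrinter and InvoiceRepository names are my own hypothetical examples, not anything from the source.

    // Before: one class with three reasons to change
    // (tax rules, report formatting, persistence).
    class InvoiceAllInOne {
        private final double amount;
        InvoiceAllInOne(double amount) { this.amount = amount; }
        double totalWithTax() { return amount * 1.2; }            // business rule
        String asText() { return "Invoice: " + totalWithTax(); }  // formatting
        void save(String path) { /* file I/O elided */ }          // persistence
    }

    // After: each class has one, and only one, reason to change.
    class Invoice {
        private final double amount;
        Invoice(double amount) { this.amount = amount; }
        double totalWithTax() { return amount * 1.2; }
    }
    class InvoicePrinter {
        String asText(Invoice i) { return "Invoice: " + i.totalWithTax(); }
    }
    class InvoiceRepository {
        void save(Invoice i, String path) { /* file I/O elided */ }
    }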
Questions that strike me
This is vague. Are purpose, goal and responsibility the same or different?
All of them (goals, purposes and responsibilities) are recursively composable and decomposable.
You can always abstract from two somewhat cohesive responsibilities to one.
Ultimately you arrive at a responsibility for "customer relationship management" or "enterprise resource planning".
Conversely, decomposing responsibilities to the lowest conceivable level can create the complexities of too many classes and too much inter-object messaging.
Should one class be responsible for one attribute, one normalised entity, one aggregate entity, or one wider view of a database structure?
In short, mapping one component to one responsibility leaves lots of room for interpretation and debate.
Might the “S” be better read as: separate concerns, looking for tight cohesion within a module and loose coupling between modules?
Open-Closed Principle
“Entities should be open for extension, but closed for modification.
Software entities (classes, modules, functions, etc.) should be extendable without actually changing the contents of the class you’re extending.
If we could follow this principle strongly enough, it is possible to then modify the behavior of our code without ever touching a piece of the original code.”
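A minimal Java sketch of the intent, using the usual shape-drawing example (my names, not the source’s): new behavior arrives as a new class, and the existing calculator is never touched.

    // The abstraction is the extension point.
    interface Shape {
        double area();
    }
    class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }
    // Adding Triangle later means adding a class, not editing AreaCalculator.
    class Triangle implements Shape {
        private final double base, height;
        Triangle(double base, double height) { this.base = base; this.height = height; }
        public double area() { return 0.5 * base * height; }
    }
    class AreaCalculator {
        double total(java.util.List<Shape> shapes) {
            return shapes.stream().mapToDouble(Shape::area).sum();
        }
    }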
Questions that strike me
Can I read this as: Don’t screw up other subtypes or clients of a supertype by modifying it when you add a new subtype?
At the level of solution architecture I teach, composition (I prefer to say delegation) trumps inheritance.
So, does this second principle stack up outside of inheritance trees?
Adding new classes or modules rather than changing old ones may have short-term benefits.
In the long-term, it can mean code written for the first set of requirements becomes obscurely buried deep within newer code written for requirements that have been both extended and changed.
Surely, this is one way that code can become bloated, over-complex and incomprehensible?
Likely, one day, there will be a case for refactoring, or even throwing away, old code?
Liskov Substitution Principle
“Barbara Liskov and Jeannette Wing formulated the principle succinctly in a 1994 paper as follows:
Let φ(x) be a property provable about objects x of type T. Then φ(y) should be true for objects y of type S where S is a subtype of T.
The human-readable version repeats pretty much everything that Bertrand Meyer already has said, but it relies totally on a type-system:
1. Preconditions cannot be strengthened in a subtype.
2. Postconditions cannot be weakened in a subtype.
3. Invariants of the supertype must be preserved in a subtype.”
“Robert Martin made the definition smoother and more concise in 1996
Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.
Or simply: Subclass/derived classes should be substitutable for their base/parent class.
It states that any implementation of an abstraction (interface) should be substitutable in any place that the abstraction is accepted.
Basically, it takes care that while coding using interfaces in our code, we not only have a contract of input that the interface receives,
but also the output returned by different classes implementing that interface; they should be of the same type.”
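The textbook illustration is the rectangle/square case; the Java sketch below is my own (not from the source) and shows how a mathematically valid subtype can still break substitutability by strengthening the effect of an inherited setter.

    class Rect {
        protected double width, height;
        void setWidth(double w)  { this.width = w; }
        void setHeight(double h) { this.height = h; }
        double area() { return width * height; }
    }
    // A Square must keep width == height, so each setter silently
    // changes both sides -- a stronger effect than Rect promises.
    class Square extends Rect {
        @Override void setWidth(double w)  { width = w; height = w; }
        @Override void setHeight(double h) { width = h; height = h; }
    }
    class LspDemo {
        static double stretch(Rect r) {
            r.setWidth(5);
            r.setHeight(2);
            return r.area();  // a caller reasoning about Rect expects 5 * 2 = 10
        }
        public static void main(String[] args) {
            System.out.println(stretch(new Rect()));    // 10.0
            System.out.println(stretch(new Square()));  // 4.0 -- not substitutable
        }
    }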
Questions that strike me
Can I read this as: Don’t screw up an operation in a supertype or interface by changing its specification in a subtype or implementation?
So, an operation (calculateArea) inherited from a class (Quadrangle) should meet the same specification when invoked on a subtype object (Square or Parallelogram).
It should not be more constrained by preconditions, or produce different results or effects.
Does this prevent a supertype operation being abstract and polymorphic - implemented differently in different subtypes?
E.g. “Add” is implemented in one subtype as a number-adding operation and in another as a text-concatenation operation?
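As I understand it, no: the principle constrains contracts, not implementations. A hypothetical Java sketch of my own: the two implementations below behave entirely differently, yet each honors the declared contract (combine two values of T into one T), so neither violates substitutability.

    interface Combiner<T> {
        // Contract: return a non-null T combining a and b.
        T combine(T a, T b);
    }
    class NumberAdder implements Combiner<Integer> {
        public Integer combine(Integer a, Integer b) { return a + b; }
    }
    class TextConcatenator implements Combiner<String> {
        public String combine(String a, String b) { return a + b; }
    }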
Interface Segregation Principle
“A client should not be forced to implement an interface that it doesn’t use.
This rule means we should break our interfaces into many smaller ones, so they better satisfy the exact needs of our clients.
Similar to the Single Responsibility Principle, the goal is to minimize side consequences and repetition by dividing the software into multiple, independent parts.”
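The classic illustration, sketched in Java with my own hypothetical names (not the source’s): a “fat” Worker interface would force a robot to stub out eat(), while segregated interfaces let each client depend only on what it uses.

    // Fat interface: any implementer must provide both operations.
    interface Worker {
        void work();
        void eat();
    }

    // Segregated interfaces.
    interface Workable { void work(); }
    interface Feedable { void eat(); }

    class HumanWorker implements Workable, Feedable {
        public void work() { /* ... */ }
        public void eat()  { /* ... */ }
    }
    class RobotWorker implements Workable {
        public void work() { /* ... */ }  // no meaningless eat() stub required
    }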
Questions that strike me
Clearly OK if each client needs a different set of operations.
What if several clients need slightly different subsets of the operations currently in one interface?
What if one client wants to improve an operation (in their interface) that is currently replicated and used by another client in another interface?
Surely, multiple interfaces can increase the complexity and maintenance effort needed?
It has been proposed that smaller client-specific interfaces prevent clients becoming dependent on operations they don’t need.
But why would clients ever invoke operations they don’t need?
Dependency Inversion Principle
“High-level modules should not depend on low-level modules. Both should depend on abstractions.
Abstractions should not depend on details. Details should depend on abstractions.
Or simply: Depend on abstractions, not on concretions.
By applying the Dependency Inversion Principle, the modules can be easily changed by other modules just changing the dependency module.
Any changes to the low-level module won’t affect the high-level module.
There’s a common misunderstanding that dependency inversion is simply another way to say dependency injection. However, the two are not the same.”
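A minimal Java sketch of that distinction, with hypothetical names of my own: the inversion is the direction of the compile-time dependency (both modules point at the abstraction); the injection is merely the wiring shown in the final comment.

    // High-level policy depends on an abstraction, not on a concrete detail.
    interface MessageSender {
        void send(String to, String body);
    }
    class Notifier {                                  // high-level module
        private final MessageSender sender;
        Notifier(MessageSender sender) { this.sender = sender; }
        void notifyUser(String user) { sender.send(user, "Hello"); }
    }
    class SmtpSender implements MessageSender {       // low-level module
        public void send(String to, String body) { /* SMTP I/O elided */ }
    }
    // Wiring at the composition root (this part is dependency injection):
    //   new Notifier(new SmtpSender()).notifyUser("ada@example.com");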
Questions that strike me
Does this simply mean to invoke operations in abstract interfaces rather than in concrete classes (or other kinds of module)?
Surely, at least some changes to a module (high or low) must ripple up through the interface to the modules that depend on it?
COMMENTS WRT ANOTHER SOURCE
https://blog.ndepend.com/defense-solid-principles/
This defense of the principles isn’t entirely convincing.
And the comments below the article add to my doubts.
José Arturo Cano says: January 2, 2019 at 7:51 pm
“I find your interpretations very compelling, but I’ve found so diverse interpretations of these principles on the web that I can say they are close to useless.
The idea of a software design principle IMHO, is to provide a base framework for developers to discuss software development best practices.
But when the advocators can’t even agree on what their
principles mean, it’s time to look for alternatives.
I also have found that people trying to follow these principles create extremely over-modularized architectures.
Mostly because they decompose simple implementations into even smaller modules, disperse over the project.
Which makes it close to impossible to discern the purpose
of these micro-modules in the context of the whole project.”
CWS says: May 10, 2019 at 2:49 am
“I would agree with Joel on this.
Things like Test Driven Design are good for old codebases where things don’t change very much.
Or you have this amazing client that knows exactly what they want
and things aren’t going to change at all
and the architect has sat down with the client and defined every class, every method, its parameters, what it returns
and the programmers just write unit tests around that, then they write the code…..
Wait….let’s just stop right there.
I’ve never had any client that knows what they really want.
In 20 years of coding for employers and working for myself as a contractor.. not once.
It’s not just me – that is the real world.
Writing a bunch of unit tests before you’ve even written code is a waste of time.
Like Joel mentioned, you have all these unit tests, then things change one day.
So now you have to go edit all the unit tests to compensate for the changes.
THEN change your code so that the unit tests pass… then two days later the client changes their mind again –
or they want to add a new feature and that new feature suddenly impacts a big chunk of code, including any UI layouts…
so now you have to go change the unit tests AGAIN… and so on and so on.
How many man hours did you spend writing unit tests that really really don’t matter?
The same setup could be used against SOLID.
There’s just too many changes in a real world environment where features are adding, removed, edited to an ever evolving piece of software.
You really have to take each project as a separate
project and trying to cram an ideal into every project is going to hamper
productivity and creativity in the end.”
BRUTAL CONCLUSION
The SOLID principles apply firstly to classes arranged in inheritance trees.
The role of inheritance trees in software is limited.
The principles can be applied more widely to the scope and definition of all classes in OO programs.
They can be useful, but they can be counter-productive if you don't minimize the complexities resulting from
· overly fine-grained modularization
· preserving superseded code forever
· maintaining multiple overlapping interfaces, and
· inserting interfaces between every pair of communicating modules.
S? I prefer: separate concerns, looking for tight cohesion within a module and loose coupling between modules.
O and L? These smack of a faith that inheritance trees are eternal or universal truths, which I am averse to.
I and D? These may over-complicate a design.
I don't see SOLID as the foundation stone for design to meet the two, sometimes conflicting, aims I was taught
1. write the minimal amount of code to meet the requirements
2. write code you can understand and amend
What to say about scoping the much coarser-grained components assigned to programmers and teams?
As per Conway's law (1968), a component should be no larger than the cognitive capacity of a person or a small team.
Think here of our sociology: a hunter-gatherer foraging party is typically 7 or 8 people.
While assessing a team's cognitive capacity is a way to size an application component, the dividing lines still need to be drawn well.
And for that, rules of thumb include designing a component to
· support and enable one user role, or
· maintain the data in one “bounded context”.
For the latter, see my article on issues with microservices.