I've read a lot of articles about framework X versus Y, and even wrote a blog article about the silliness of comparing Laravel with Symfony, for example. The argument I hear most is that Symfony is a better match for enterprise applications. You see similar claims when comparing Vue.js with Angular. So my question is: what makes something an enterprise application, and why does this affect the choice of framework? It should be more than 'enterprise requires more abstract, complicated code', as any application of any size benefits from 'clean code'.
What makes something 'enterprise'?
The word 'enterprise' is of course related to some big (multinational) company with lots of users/employees/customers. There is a very big chance that such a company also has one or more very big web applications for automating as much as possible.
So we first have to define what makes an application 'big', as this is still quite subjective. Nonetheless, you will run into scalability issues when your application is written at enterprise scale:
- an application could be considered big because of a large number of lines of code.
- an application could be considered big because there are so many users active on the website.
- the company is so big that multiple teams of developers work on the same web application at the same time.
- an application could be considered big because of the number of connections to other (micro)services.
So to make a web application 'enterprise' level it needs to be able to scale with those things in mind.
Cross-cutting concerns
A large number of lines of code often means that cross-cutting concerns become very important. For example, every HTTP request should trigger an authorization check to verify you are allowed to perform a specific action. Luckily most frameworks support cross-cutting concerns: an authorization check is typically implemented with either middleware or dispatched kernel request events. Without this, maintenance is practically impossible. It also makes it easier for newer developers not to reinvent the wheel.
Also, in case you think it's good to avoid a large application by making everything a microservice: to make this scalable you need cross-cutting concerns for how one API communicates with another, so every microservice functions the same way and other developers do not need to know how to handle this.
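To make the middleware idea concrete, here is a minimal self-contained sketch. All class and function names are invented for illustration; this is not any real framework's API, but it shows how a single authorization check is applied to every request without repeating it in each controller.

```php
// Hypothetical minimal middleware pipeline; names are invented for
// illustration, not a real framework API.

interface Middleware {
    /** @param callable(array): string $next */
    public function handle(array $request, callable $next): string;
}

// The cross-cutting concern: one authorization check for every request.
class AuthorizationMiddleware implements Middleware {
    public function handle(array $request, callable $next): string {
        if (($request['role'] ?? null) !== 'admin') {
            return '403 Forbidden';
        }
        return $next($request);
    }
}

// Compose the middleware stack around a controller action.
function pipeline(array $middlewares, callable $controller): callable {
    $handler = $controller;
    foreach (array_reverse($middlewares) as $middleware) {
        $handler = fn(array $request): string => $middleware->handle($request, $handler);
    }
    return $handler;
}

$app = pipeline([new AuthorizationMiddleware()], fn(array $r): string => '200 OK');
```

Every request now passes through `AuthorizationMiddleware` before reaching the controller, so no individual action can forget the check.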
D.R.E. = Don't Repeat Execution
This is my own term, a pun on DRY programming. An enterprise application has many users, so it may need to handle 200-3000 requests per second. We can handle a lot by scaling horizontally, i.e. running multiple servers for your application (which also requires knowledge of how to share sessions, for example), but it's still important that the application only does things that are actually relevant.
With that many requests, you want the framework initialization time on every HTTP request to be short. A developer has to be aware that in development it's normal to change classes or configuration, but in production this only happens on deployment of a new version. If you use a framework at enterprise level, this has a few consequences for avoiding the repeated execution of trivial work again and again.
A framework should not have to re-register all its services in the service container on every request. Even if that code is not slow, the moment your codebase becomes larger, your application will pay this cost on every request. Laravel always re-executes its service provider classes on every request, except with a rarely used cache option. Symfony instead generates a highly optimized service container class with all registered services. To avoid cache invalidation problems, it is advised to use a rigid form of service registration, in the form of YAML or XML files, for registering Symfony services.
To make matters worse, the same can be said about resolving services from the container. Laravel will do the autowiring with reflection on every request, but since the classes never change, this is a useless calculation on every request.
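A simplified, self-contained sketch of the difference (hypothetical classes, not the actual Symfony or Laravel internals): a reflection-based container inspects constructors at runtime on every lookup, while a "compiled" container is plain generated PHP code that does the same wiring with no inspection at all.

```php
// Hypothetical illustration, not the real framework internals.

class Mailer {}
class NewsletterService {
    public function __construct(public Mailer $mailer) {}
}

// Reflection-based autowiring: constructors are inspected at runtime,
// on every request, even though the classes never change.
function resolveWithReflection(string $class): object {
    $ctor = (new ReflectionClass($class))->getConstructor();
    $args = [];
    foreach ($ctor?->getParameters() ?? [] as $param) {
        $args[] = resolveWithReflection($param->getType()->getName());
    }
    return new $class(...$args);
}

// A "compiled" container: the same wiring emitted as plain PHP code.
// Nothing is inspected at runtime, and opcache caches the whole class.
class CompiledContainer {
    public function getNewsletterService(): NewsletterService {
        return new NewsletterService(new Mailer());
    }
}
```

Both produce the same object graph; the compiled version simply pays the wiring cost once, at build time instead of per request.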
Article edit: recently I found an article by someone benchmarking Symfony and Laravel on resolving services from the container. Symfony was 1108 times faster, with less memory usage as well! See that article.
There is also a PHP extension called opcache, which is a must-have for enterprise production code. Because Symfony outputs the generated container as a PHP file, opcache ensures that source code is compiled only once into opcodes. In a nutshell, you could loosely say this part of Symfony is compiled ahead of time :)
In fact, the fastest way to load application configuration is actually writing cached PHP files! It's faster than using json_decode or unserialize on a static file read with file_get_contents.
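A minimal sketch of that technique (function and file names invented): parse the configuration once, dump it with `var_export()`, and `include` the generated file afterwards so opcache serves the compiled result without re-parsing JSON on every request.

```php
// Sketch: cache a parsed configuration as a plain PHP file.
// Function name and paths are invented for illustration.

function loadConfig(string $jsonFile, string $cacheFile): array {
    if (!is_file($cacheFile)) {
        $config = json_decode(file_get_contents($jsonFile), true);
        // Write the parsed array as executable PHP; opcache compiles this
        // file once instead of json_decode running on every request.
        file_put_contents($cacheFile, '<?php return ' . var_export($config, true) . ';');
    }
    return include $cacheFile;
}
```

The first call pays the parse-and-write cost; every later call is a plain `include` of an already-compiled file.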
Of course, the downside is that generating PHP files to avoid repeated recalculation makes the code rigid in itself. It took quite some time before you could apply environment variables dynamically in a Symfony application, for example, and there are still a few cases where the caching causes issues.
So in a nutshell, look for code like this that is executed on every request but should not be, because it rarely changes:
- Validating a static configuration file.
- Registering all the services in the service container.
- Using the Reflection API on your codebase.
- Parsing the same source code again and again in an interpreted language.
- Parsing the same static DSLs every time.
- Any pure function that takes a lot of time to recalculate.
Too many possible solutions for a generic problem
Sometimes a framework provides different options for doing something generic in a controller, for example getting the current request or checking authorization. This looks convenient, as developers can pick the option they prefer, but there are two issues with offering multiple solutions for the same problem. For example, these are all possible ways to get the Request object inside a Laravel controller action:
class ExampleController {
    public function __construct(private readonly Request $request) {
    }

    public function index(Request $request) {
        $request = $this->request;
        $request = request();
        $request = app('request');
        $request = app(Request::class);
        $request = App::make('request');
        $request = App::make(Request::class);
    }
}
Compare that with Symfony:
class ExampleController {
    public function __construct(private readonly RequestStack $requestStack) {
    }

    public function index(Request $request) {
        $request = $this->requestStack->getMainRequest();
    }
}
Since the RequestStack service is only needed in very particular niche situations, it is safe to say that in 99% of cases the request object will simply be provided as a method argument.
- First of all, it makes the code less 'clean' and feel more unstructured if every controller uses a different solution for the same problem. While the definition of clean code is subjective, any new programmer will quickly see that no strict structure is applied and will introduce their own preferred structure into the code.
- When updating vendor libraries that contain a BC break, you will have to manually search for all possible variations and apply the changes across the codebase. If there is only one way to do something, you will have a much easier time using a tool like Rector or PhpStorm to refactor all code automatically.
Error handling and logging
Another thing a framework should do if it wants to handle enterprise applications is throw and log anything unexpected that pops up. For example, in the case mentioned above, Symfony will throw an error if you ask for the request object in a console command, as there is no request in a console command. Laravel will just do its default auto-wiring and come up with a new Request().
The bigger the program, the less you want to debug an entire codebase only to find out that an error was silently discarded, or that a fallback was used that makes no sense in that specific context.
You also want to avoid applying error handling everywhere; let error handling be a cross-cutting concern as well. Exceptions can be misused as goto statements between different files! In case you do need to catch an exception, make sure you rethrow it, or even better, chain it: rethrowing a new exception without chaining should only be used if you do not care about the stack trace; chaining the original exception is preferred.
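The two variants can be sketched as follows. The exception type and the payment scenario are invented for illustration; the chaining mechanism itself is standard PHP, via the third `$previous` argument of the Exception constructor.

```php
// Hypothetical domain exception for illustration.
class PaymentFailedException extends RuntimeException {}

function chargeOrThrow(callable $charge): void {
    try {
        $charge();
    } catch (RuntimeException $e) {
        // Preferred: chain the original exception as $previous so the full
        // stack trace survives. Throwing without it, i.e.
        //     throw new PaymentFailedException('Charging the order failed');
        // should only be used when the original trace truly does not matter.
        throw new PaymentFailedException('Charging the order failed', 0, $e);
    }
}
```

A caller higher up can now log both the domain-level message and, via `getPrevious()`, the original low-level cause.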
Another case: in big applications it's quite normal to have hundreds of environment variables. You want to know for sure that all of them are set properly and in the correct format. Again, Laravel will just return null for missing environment variables, so you will only discover the misconfiguration the moment the value is used incorrectly.
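A minimal fail-fast sketch (the helper name is invented): validate every required environment variable once at application boot, so a missing or malformed value produces a loud error immediately instead of a null somewhere deep in the code later.

```php
// Invented helper: read a required environment variable or fail at boot.
function requireEnv(string $name, string $pattern = '/./'): string {
    $value = getenv($name);
    if ($value === false || !preg_match($pattern, $value)) {
        throw new RuntimeException("Missing or malformed environment variable: $name");
    }
    return $value;
}
```

Calling this for every required variable during startup turns misconfiguration into a deployment failure rather than a production bug.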
One large class or many small classes?
In large applications SOLID is highly recommended. In particular, every class should have just a single responsibility. The use of design patterns also helps in understanding what a particular class does.
I know inexperienced programmers think that having to click through the code is more annoying with multiple classes, but they are too focused on the order in which things are executed. An enterprise application is too big to follow like that anyway; you have to look for structures instead.
- For example, if a class is called BasketItemFactory, then I know that it follows the Factory design pattern and that its responsibility is creating basket items. So everywhere I need to create basket items, I take the BasketItemFactory as a dependency.
- If something is called CacheDecorator and it implements SerializerInterface, then I know it follows the Decorator design pattern and adds caching logic to a Serializer service.
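The CacheDecorator example from the list above can be sketched like this. The interface is deliberately simplified (Symfony's real SerializerInterface has a different signature); the point is the pattern: same interface, extra behavior wrapped around any existing implementation.

```php
// Simplified interface for illustration; Symfony's real
// SerializerInterface has a different signature.
interface SerializerInterface {
    public function serialize(object $data): string;
}

class JsonSerializer implements SerializerInterface {
    public function serialize(object $data): string {
        return json_encode(get_object_vars($data));
    }
}

// Decorator: implements the same interface and adds caching
// around any serializer, without inheritance.
class CacheDecorator implements SerializerInterface {
    private array $cache = [];

    public function __construct(private readonly SerializerInterface $inner) {}

    public function serialize(object $data): string {
        $key = spl_object_id($data);
        return $this->cache[$key] ??= $this->inner->serialize($data);
    }
}
```

Because the decorator depends only on the interface, it can wrap any serializer implementation, and can itself be swapped out or stacked with other decorators.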
Again, if we compare Symfony's validator service with Laravel's, we see that Laravel's Validator class does everything: it registers all custom rules, parses rule strings, and performs the actual validation calls. If I want to override any logic, all I can do is use inheritance, because the Validator class does too much.
Symfony has one Validator service following the Facade design pattern. Every part of the validator component has its own class and can be replaced with your own implementation.
Testing
Enterprise applications require quite a lot of testing. It helps when multiple teams work on the same project and any bug in production can result in data corruption, even if it was live for only 5 minutes. In general most people prefer writing integration tests, but as an application becomes bigger, the performance of integration tests becomes a huge downside. In those applications, test suites can take 10-20 minutes to execute, which actually hinders development. In the end you will have to prefer unit tests over integration tests for testing specific use cases.
In Laravel it's very common to use Laravel facades and fake facade services, but the problem is that those facades only work with a running Laravel application. Considering that these are static methods, you have no other option than to actually use them with a database. To me it also feels weird: why do I need a working web application to test a class that tells me whether a domain name is available? Luckily facades are optional, and it's perfectly valid to use dependency injection throughout your application.
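Here is the domain-name example written with dependency injection, so it is testable without booting any framework. All class names are invented for illustration; a hand-written fake replaces the real DNS lookup in tests.

```php
// Invented classes for illustration.
interface DnsLookupInterface {
    public function exists(string $domain): bool;
}

class DomainAvailabilityChecker {
    public function __construct(private readonly DnsLookupInterface $dns) {}

    public function isAvailable(string $domain): bool {
        return !$this->dns->exists($domain);
    }
}

// In a unit test, a fake implementation replaces the real DNS lookup;
// no database and no booted application are needed.
class FakeDnsLookup implements DnsLookupInterface {
    public function __construct(private readonly array $taken) {}

    public function exists(string $domain): bool {
        return in_array($domain, $this->taken, true);
    }
}
```

The unit test becomes trivial and runs in microseconds, because the class under test only depends on an interface.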
Cache stampede
I don't think I'm saying anything new when I state that an enterprise application requires caching for performance reasons.
One thing that is often forgotten, though, and very common in enterprise applications, is a cache stampede. The problem occurs when a cache entry expires while a lot of users are active on your site. Since the cached value is expired, every request will perform the calculation again. If that calculation requires a lot of resources, it will kill the server really quickly.
A simple solution is to randomly recalculate the value before the cached value expires, making the chance of recalculation bigger the closer you get to the expiration date. This way only a hard cache clear could take down the website. This technique is called "probabilistic early expiration". And guess what: Symfony does this by default, and Laravel does not.
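The decision rule can be sketched in a few lines. This is a hedged, simplified sketch of the published "x-fetch" formula, not Symfony's actual implementation; the function name and parameters are invented for illustration.

```php
// Sketch of probabilistic early expiration ("x-fetch").
// $expiresAt: unix timestamp when the cache entry expires.
// $delta: seconds the last recomputation took.
// $beta: eagerness factor, typically around 1.0.
function shouldRecompute(float $expiresAt, float $delta, float $beta = 1.0): bool {
    $now = microtime(true);
    // -log(rand) is an exponentially distributed jitter: the closer we
    // get to $expiresAt, the more likely a single request recomputes
    // the value early, so the whole crowd never recomputes at once.
    $rand = random_int(1, PHP_INT_MAX) / PHP_INT_MAX;
    return $now - $delta * $beta * log($rand) >= $expiresAt;
}
```

Because the jitter is random per request, roughly one request recomputes slightly before expiry while the rest keep serving the cached value, which is exactly what prevents the stampede.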
Conclusion
While it may look like I'm bashing Laravel here, the assumption that Symfony is better suited for enterprise applications is quite a valid statement. But no matter which framework you use, you can at least anticipate how to handle enterprise applications:
- Cross-cutting concerns are very important as you need to simplify for other developers as much as possible!
- Test everything! Regressions are easy in a large codebase. And make sure your classes can always be tested without a working application or you will have a very slow test suite.
- Anything unexpected should result in an error and never be silently discarded, as silent failures make debugging the application absolutely terrible.
- Check how long your framework takes to initialize in production mode. No matter how much you scale horizontally by adding more server resources, speed cannot improve much if framework initialization slows down with every written line of code.
- A rigid, very structured codebase is required in an enterprise application, as it makes the code more predictable and easier to refactor. Sometimes forcing a single solution, or even introducing concepts like DSLs or YAML/XML schemas, is required to limit the scope.
- If something never changes in production, make sure it is not recalculated or re-evaluated on every request!
- Learn design patterns! Not only will they lead to a more structured solution, they also help you communicate the structure to other developers. Never think purely imperatively in large applications.