<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Pixeljets]]></title><description><![CDATA[Usability & Technology Advocates. Stories about modern PHP & Javascript.]]></description><link>https://pixeljets.com/blog/</link><image><url>https://pixeljets.com/blog/favicon.png</url><title>Pixeljets</title><link>https://pixeljets.com/blog/</link></image><generator>Ghost 2.19</generator><lastBuildDate>Sun, 07 Mar 2021 06:30:24 GMT</lastBuildDate><atom:link href="https://pixeljets.com/blog/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Clickhouse as an alternative to ElasticSearch and MySQL, for log storage and analysis, in 2021]]></title><description><![CDATA[<p>In 2018, I wrote an <a href="https://pixeljets.com/blog/clickhouse-as-a-replacement-for-elk-big-query-and-timescaledb/">article about Clickhouse</a>. That piece is still pretty popular across the internet, and it has even been translated a few times. More than two years have passed since, and the pace of Clickhouse development <a href="https://github.com/ClickHouse/ClickHouse/pulse/monthly">is not slowing down</a>: 800 merged PRs during the last month alone!</p>]]></description><link>https://pixeljets.com/blog/clickhouse-vs-elasticsearch/</link><guid isPermaLink="false">603bf39ad8be23287cd9210c</guid><dc:creator><![CDATA[Anton Sidashin]]></dc:creator><pubDate>Tue, 02 Mar 2021 15:38:07 GMT</pubDate><content:encoded><![CDATA[<p>In 2018, I wrote an <a href="https://pixeljets.com/blog/clickhouse-as-a-replacement-for-elk-big-query-and-timescaledb/">article about Clickhouse</a>. That piece is still pretty popular across the internet, and it has even been translated a few times. 
More than two years have passed since, and the pace of Clickhouse development <a href="https://github.com/ClickHouse/ClickHouse/pulse/monthly">is not slowing down</a>: 800 merged PRs during the last month alone! Not impressed yet? Check out the full changelog, for example, for 2020: <a href="https://clickhouse.tech/docs/en/whats-new/changelog/2020/">https://clickhouse.tech/docs/en/whats-new/changelog/2020/</a> The descriptions of just the new features for each year can take an hour to go through.</p><p>For the sake of honest comparison, <a href="https://github.com/elastic/elasticsearch/pulse/monthly">the ElasticSearch repo has a jaw-dropping 1076 PRs merged for the same month</a>, and in terms of features, their pace is <em>very</em> impressive as well!</p><p>We are using Clickhouse for log storage and analytics in the <a href="https://ApiRoad.net">ApiRoad.net</a> project (which is <a href="https://apiroad.net">an API marketplace where developers sell their APIs</a>, still in active development), and we are happy with the results so far. As an API developer myself, I know how important the observability and analysis of the HTTP request/response cycle are for maintaining quality of service and quickly detecting bugs; this is especially true for a pure API service. <em>(If you are an API author and want to utilize the ApiRoad analytics &amp; billing platform to sell API subscriptions, drop me a message at <a href="mailto:contact@apiroad.net">contact@apiroad.net</a> with your API description – I will be happy to chat!)</em></p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/content/images/2021/02/demo2--1-.gif" class="kg-image"></figure><!--kg-card-end: image--><p>We are also using the ELK (ElasticSearch, Logstash, filebeat, Kibana) stack on other projects, for very similar purposes: collecting HTTP and mail logs for later analysis and search via Kibana.</p><p>And, of course, we use MySQL. 
Everywhere!</p><p>This post is about the major reasons why we chose Clickhouse, and not ElasticSearch (or MySQL), as a storage solution for ApiRoad.net's essential data: request logs (important note: we still use MySQL there, for OLTP purposes).</p><h2 id="1-sql-support-json-and-arrays-as-first-class-citizens-">1. SQL support, JSON and Arrays as first-class citizens</h2><p>SQL is a perfect language for analytics. I love the SQL query language, and a SQL schema is a perfect example of boring tech that I recommend using as the source of truth for all data in 99% of projects: if the project code is not perfect, you can improve it relatively easily as long as your database state is strongly structured. If your database state is a huge JSON blob (NoSQL) and no one can fully grasp the structure of this data, such refactoring usually gets much more problematic.</p><p>I saw this happening, especially in older projects with MongoDB, where every new analytics report and every new refactoring involving data migration is a big pain. Starting such projects is fun – as you don't need to spend your time carefully designing the complete project schema, just "see how it goes" – but maintaining them is not fun!</p><p>But it is important to note that this rule of thumb – "use a strict schema" – is not that critical for log storage use cases. That's why ElasticSearch is so successful: it has many strong sides, including a flexible schema.</p><p>Back to JSON: traditional RDBMS are still catching up with NoSQL DBMS in terms of JSON querying and syntax, and we should admit JSON is a very convenient format for dynamic structures (like log storage).</p><p>Clickhouse is a modern engine that was designed and built when JSON was already a thing (unlike MySQL and Postgres), and Clickhouse does not have to carry the baggage of backward compatibility and the strict SQL standards of these super-popular RDBMS, so the Clickhouse team can move fast in terms of features and improvements, and they indeed move fast. 
Developers of Clickhouse had more opportunities to hit a sweet balance between strict relational schemas and JSON flexibility, and I think they did a good job here. Clickhouse tries to compete with Google BigQuery and other big players in the analytics field, so it got many improvements over "standard" SQL, which makes its syntax a killer combo that is, in a lot of cases, much better for analytics and various calculation purposes than what you get in a traditional RDBMS.</p><p>Some basic examples:</p><p>In MySQL, you can extract JSON fields, but complex JSON processing, like joining relational data on JSON data, became available only recently, <a href="https://mysqlserverteam.com/json_table-the-best-of-both-worlds/">in version 8 with the JSON_TABLE function</a>. In PostgreSQL, the situation is even worse: <a href="https://stackoverflow.com/a/61732970/1132016">no direct JSON_TABLE alternative until PostgreSQL 12</a>!</p><p>Compare that to Clickhouse's JSON and array feature set – it is just miles ahead. Links:</p><ul><li><a href="https://clickhouse.tech/docs/en/sql-reference/statements/select/array-join/">arrayJoin</a></li><li><a href="https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/grouparray/">groupArray</a></li><li><a href="https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#array-map">arrayMap</a></li><li><a href="https://clickhouse.tech/docs/en/sql-reference/functions/array-functions/#array-filter">arrayFilter</a></li></ul><p>These are useful in a lot of cases where you would use <code>generate_series()</code> in PostgreSQL. A concrete example from ApiRoad: we need to map request counts onto a Chart.js timeline. If you do a regular <code>SELECT .. group by day</code>, you will get gaps if some days did not have any queries. And we don't need gaps, we need zeros there, right? This is exactly where the <code>generate_series()</code> function is useful in PostgreSQL. 
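</p><p><em>As a quick illustration (the table and column names here are assumptions for the example, not from the real project), a gap-filling query in PostgreSQL could look like this:</em></p><!--kg-card-begin: code--><pre><code>-- one row per day, with 0 instead of a gap for days without requests
SELECT d::date AS day, count(l.id) AS cnt
FROM generate_series('2021-02-01'::date, '2021-02-28'::date,
                     interval '1 day') AS d
LEFT JOIN logs l ON l.started_at::date = d::date
GROUP BY day
ORDER BY day;</code></pre><!--kg-card-end: code--><p>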
In MySQL, <a href="https://ubiq.co/database-blog/fill-missing-dates-in-mysql/">the recommendation is to create a stub calendar table and join on it...</a> not too elegant, huh?</p><p>Here is how to do it in ElasticSearch: <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-datehistogram-aggregation.html#_missing_value_2">https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-datehistogram-aggregation.html#_missing_value_2</a></p><p>Regarding the query language: I am still not comfortable with the verbosity and approach of ElasticSearch's Lucene syntax, the HTTP API, and all those JSON structures that you need to write to retrieve some data. SQL is my preferred choice.</p><p>Here is the Clickhouse solution for date gap filling:</p><!--kg-card-begin: code--><pre><code>SELECT a.timePeriod as t, b.count as c from (
    with (select toUInt32(dateDiff('day', [START_DATE], [END_DATE])))
        as diffInTimeUnits

    select arrayJoin(
        arrayMap(x -&gt; toDate(addDays([START_DATE], x)),
                 range(0, diffInTimeUnits + 1))
    ) as timePeriod
) a

LEFT JOIN

(
    select count(*) as count, toDate(toStartOfDay(started_at)) as timePeriod
    from logs
    WHERE [CONDITIONS]
    GROUP BY toStartOfDay(started_at)
) b ON a.timePeriod = b.timePeriod</code></pre><!--kg-card-end: code--><p>Here, we generate a virtual table via a lambda function and a range, and then left join it onto the results from the logs table, grouped by day.</p><p>I think the <code>arrayJoin</code> + <code>arrayMap</code> + <code>range</code> functions allow more flexibility than Postgres's <code>generate_series()</code> or the ElasticSearch approach. There is also the <code><a href="https://clickhouse.tech/docs/en/sql-reference/statements/select/order-by/#orderby-with-fill">WITH FILL</a></code> keyword available for a more concise syntax.</p><h2 id="2-flexible-schema-but-strict-when-you-need-it">2. Flexible schema - but strict when you need it</h2><p>For log storage tasks, the exact data schema often evolves during the project's lifetime, and ElasticSearch allows you to put a huge JSON blob into an index and figure out the field types and indexing part later. Clickhouse allows the same approach. You can put data into a JSON field and filter it relatively quickly, though it won't be quick at terabyte scale. Then, when you see that you often need fast query execution on a specific data field, you add materialized columns to your logs table, and these columns extract values from the existing JSON on the fly. This allows much faster queries on terabytes of data.</p><p>I recommend this video from Altinity on the topic of JSON vs tabular schema for log data storage:</p><!--kg-card-begin: embed--><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/pZkKsfr8n3M?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><!--kg-card-end: embed--><h2 id="3-storage-and-query-efficiency">3. 
Storage and Query Efficiency</h2><p>Clickhouse is very fast at SELECTs; <a href="https://pixeljets.com/blog/clickhouse-as-a-replacement-for-elk-big-query-and-timescaledb/">this was discussed in the previous article</a>.</p><p><a href="https://youtu.be/pZkKsfr8n3M?t=2479">Interestingly, there is evidence that <strong>Clickhouse can be 5-6 times more efficient in storage compared to ElasticSearch, while also being literally an order of magnitude faster in terms of queries.</strong></a> <a href="https://habr.com/ru/company/mkb/blog/472912/"><strong>Here is another one (in Russian).</strong></a></p><p>There are no direct benchmarks (at least I could not find any), I believe because Clickhouse and ElasticSearch are very different in terms of query syntax, cache implementations, and their overall nature.</p><p>If we talk about MySQL, any imperfect query or missing index on a table with a mere 100 million rows of log data can make your server crawl and swap; MySQL is not really suited for large-scale log queries. But in terms of storage, compressed InnoDB tables are surprisingly not that bad. Of course, they are much worse in terms of compression compared to Clickhouse (sorry, no URLs to benchmarks to support the claim this time), due to InnoDB's row-based nature, but they still often manage to reduce cost significantly without a big performance hit. We use compressed InnoDB tables in some cases, for small-scale log purposes.</p><h2 id="4-statistics-functions">4. Statistics functions</h2><p>Getting the median and 99th percentile latency of 404 responses is easy in Clickhouse:</p><!--kg-card-begin: code--><pre><code>SELECT count(*) as cnt, 
  quantileTiming(0.5)(duration) as duration_median, 
  quantileTiming(0.9)(duration) as duration_90th, 
  quantileTiming(0.99)(duration) as duration_99th
  FROM logs WHERE status=404</code></pre><!--kg-card-end: code--><p>Notice the usage of the <code>quantileTiming</code> function and how <a href="https://javascript.info/currying-partials">currying</a> is elegantly used here. Clickhouse has a generic <code>quantile</code> function! But <code>quantileTiming</code> is <a href="https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/quantiletiming/#quantiletiming">optimized for working with sequences that describe distributions like web page loading times or backend response times</a>.</p><p>There is more than that. Want a weighted arithmetic mean? Want to calculate a linear regression? This is easy: just use a specialized function.</p><p>Here is the full list of Clickhouse statistical functions:</p><p><a href="https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/">https://clickhouse.tech/docs/en/sql-reference/aggregate-functions/reference/</a></p><p>Most of these are problematic to get in MySQL.</p><p>ElasticSearch is much better at this than MySQL: it has both quantiles and weighted medians, but it still does not have linear regression.</p><h2 id="5-mysql-and-clickhouse-tight-integration">5. 
MySQL and Clickhouse tight integration</h2><p>MySQL and Clickhouse have integrations on multiple levels, which makes it easy to use them together with a minimum of data duplication:</p><ul><li><a href="https://clickhouse.tech/docs/en/sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources/#dicts-external_dicts_dict_sources-mysql">MySQL external dicts</a></li><li><a href="https://clickhouse.tech/docs/en/engines/database-engines/materialize-mysql/#materialize-mysql">MySQL database replica inside Clickhouse (via binlog)</a></li><li><a href="https://clickhouse.tech/docs/en/engines/database-engines/mysql/">MySQL database engine</a> - similar to the previous one, but dynamic, without binlog</li><li><a href="https://clickhouse.tech/docs/en/sql-reference/table-functions/mysql/">MySQL table function</a> to connect to a MySQL table in a specific SELECT query</li><li><a href="https://clickhouse.tech/docs/en/engines/table-engines/integrations/mysql/">MySQL table engine</a> to describe a specific table statically in a CREATE TABLE statement</li><li><a href="https://clickhouse.tech/docs/en/interfaces/mysql/">Clickhouse can speak the MySQL protocol</a></li></ul><p>I can't say for sure how fast and stable the dynamic database engines and table engines work on JOINs (this definitely requires benchmarks), but the concept is very appealing: you have a full, up-to-date clone of your MySQL tables in your Clickhouse database, and you don't have to deal with cache invalidation and reindexing.</p><p>Regarding using MySQL with Elasticsearch, my limited experience says that these two technologies are just too different; my impression is that they speak foreign languages and do not play "together", so what I usually did was just JSONify all the data I needed to index in ElasticSearch and send it to ElasticSearch. Then, after some migration or any other UPDATE/REPLACE happens on the MySQL data, I have to figure out the re-indexing part on the Elasticsearch side. 
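</p><p><em>By contrast, pulling live MySQL data into a Clickhouse query is a one-liner via the <code>mysql()</code> table function mentioned above (the host, credentials, and column names below are placeholders, not from a real setup):</em></p><!--kg-card-begin: code--><pre><code>-- join Clickhouse logs with a users table living in MySQL, no syncing needed
SELECT u.email, count(*) AS cnt
FROM logs AS l
JOIN mysql('mysql-host:3306', 'mydb', 'users', 'mysql_user', 'password') AS u
  ON l.user_id = u.id
GROUP BY u.email</code></pre><!--kg-card-end: code--><p>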
<a href="https://www.elastic.co/blog/how-to-keep-elasticsearch-synchronized-with-a-relational-database-using-logstash">Here is an article on the Logstash-powered approach to syncing MySQL and ElasticSearch</a>. I should say I don't really enjoy Logstash, for its mediocre performance and RAM requirements, and because it is another moving part that can break. This syncing and re-indexing task is often a significant stopping factor for using Elasticsearch in simple projects with MySQL.</p><h2 id="6-new-features">6. New Features</h2><p>Want to attach an S3-stored CSV and treat it as a table in Clickhouse? <a href="https://clickhouse.tech/docs/en/engines/table-engines/integrations/s3/">Easy</a>.</p><p>Want to update or delete log rows to be compliant with GDPR? Now, this is easy!</p><p>There was no clean way to delete or update data in Clickhouse in 2018 when my first article was written, and it was a real downside. Now, it's not an issue anymore. Clickhouse leverages custom SQL syntax to delete rows:</p><!--kg-card-begin: code--><pre><code>ALTER TABLE [db.]table [ON CLUSTER cluster] DELETE WHERE filter_expr</code></pre><!--kg-card-end: code--><p>It is implemented this way to make it explicit that deleting is still a pretty expensive operation for Clickhouse (and other columnar databases), and that you should not do it every second in production.</p><h2 id="7-cons">7. Cons</h2><p>There are cons to Clickhouse, compared to ElasticSearch. First of all, if you build internal analytics for log storage, you do want to get the best GUI tool out there. And Kibana is good nowadays for this purpose when you compare it to Grafana (at least, this point of view is very popular on the Internet; the Grafana UI is not that slick sometimes). And you have to stick to Grafana or Redash if you use Clickhouse. 
<a href="https://github.com/enqueue/metabase-clickhouse-driver">(Metabase, which we adore, also got Clickhouse support!)</a></p><p>But in our case, in the ApiRoad.net project, we are building customer-facing analytics, so we have to build the analytics GUI from scratch anyway (we are using a wonderful stack of Laravel, Inertia.js, Vue.js, and Chart.js to implement the customer portal, by the way).</p><p>Another issue, related to the ecosystem: the selection of tools to consume and process data and send it to Clickhouse is somewhat limited. For Elasticsearch, there are Logstash and filebeat, tools native to the Elastic ecosystem and designed to work well together. Luckily, Logstash can also be used to put data into Clickhouse, which mitigates the issue. In ApiRoad, we are using our own custom-built Node.js log shipper which aggregates logs and then sends them to Clickhouse in batches (because Clickhouse likes big batches and does not like small INSERTs).</p><p>Another thing I don't like in Clickhouse is the weird naming of some functions, which exist because Clickhouse was created for Yandex.Metrika (a Google Analytics competitor); e.g. visitParamHas() is a function to check if a key exists in JSON. Generic purpose, bad non-generic name. I should mention that there is a bunch of fresh JSON functions with good names, e.g. JSONHas(), with one interesting detail: they use a <a href="https://github.com/simdjson/simdjson">different JSON parsing engine</a>, more standards-compliant but a bit slower, as far as I understand.</p><h2 id="conclusion">Conclusion</h2><p>ElasticSearch is a very powerful solution, but I think its strongest side is still huge setups with 10+ nodes, used for large-scale full-text search and facets, complex indexing, and score calculation – this is where ElasticSearch shines. When we talk about time-series and log storage, my feeling is there are better solutions, and Clickhouse is one of them. 
The ElasticSearch API is enormous, and in a lot of cases it's hard to remember how to do one exact thing without copy-pasting the exact HTTP request from the documentation; it just feels "enterprisy" and "Java-flavored". Both Clickhouse and ElasticSearch are memory-hungry apps, but the RAM requirement for a minimal Clickhouse production installation is 4GB, while for ElasticSearch it is around 16GB. I also think the Elastic team's focus is getting pretty wide and blurred with <a href="https://www.elastic.co/what-is/elasticsearch-machine-learning">all the new amazing machine-learning features they deploy</a>. My humble opinion is that, while these features sound very modern and trendy, such an enormous feature set is just impossible to support and improve, no matter how many devs and how much money you have, so ElasticSearch more and more falls into the "jack of all trades, master of none" category for me. Maybe I am wrong.</p><p>Clickhouse just feels different. Setup is easy. SQL is easy. The console client is wonderful. Everything just feels so light and makes sense, even for smaller setups, but rich features, replicas, and shards for terabytes of data are there when you need them.</p><h2 id="good-external-links-with-further-info-on-clickhouse-">Good external links with further info on Clickhouse:</h2><p><a href="https://altinity.com/blog/">Altinity Blog</a></p><p><a href="https://blog.luisico.net/2019/03/17/testing_clickhouse_as_logs_analysis_storage/">https://blog.luisico.net/2019/03/17/testing_clickhouse_as_logs_analysis_storage/</a></p><!--kg-card-begin: embed--><figure class="kg-card kg-embed-card"><iframe width="200" height="113" src="https://www.youtube.com/embed/zbjub8BQPyE?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></figure><!--kg-card-end: embed--><p>UPD: <a href="https://news.ycombinator.com/item?id=26316401">this post hit #1 on Hacker News; useful comments there, as 
usual!</a></p><p>Best comments:</p><blockquote>ClickHouse is incredible. It has also replaced a large, expensive and slow Elasticsearch cluster at Contentsquare. We are actually starting an internal team to improve it and upstream patches, email me if interested!</blockquote><blockquote>I'm happy that more people are "discovering" ClickHouse. ClickHouse is an outstanding product, with great capabilities that serve a wide array of big data use cases. It's simple to deploy, simple to operate, simple to ingest large amounts of data, simple to scale, and simple to query. We've been using ClickHouse to handle 100's of TB of data for workloads that require ranking on multi-dimensional timeseries aggregations, and we can resolve most complex queries in less than 500ms under load.</blockquote><p>Also from HN:</p><p><a href="https://eng.uber.com/logging/">It turns out Uber just switched from ELK to Clickhouse for log analytics; read more in their writeup.</a></p>]]></content:encoded></item><item><title><![CDATA[inWidget proxified: Free Instagram widget for your website, in 2021]]></title><description><![CDATA[<p>For one of my projects, I needed a widget which would render posts from an Instagram hashtag. It turned out to be very cumbersome to implement nowadays, because Instagram shut down its legacy API in 2020 and now the developer needs to go through a real nightmare to get approved</p>]]></description><link>https://pixeljets.com/blog/free-instagram-widget-for-your-website-in-2021/</link><guid isPermaLink="false">6034f0d6d8be23287cd920b0</guid><dc:creator><![CDATA[Anton Sidashin]]></dc:creator><pubDate>Tue, 23 Feb 2021 12:54:15 GMT</pubDate><content:encoded><![CDATA[<p>For one of my projects, I needed a widget which would render posts from an Instagram hashtag. 
It turned out to be very cumbersome to implement nowadays, because Instagram shut down its legacy API in 2020, and now the developer needs to go through a real nightmare to get approved for the official Instagram API, which still may not allow hashtag parsing.</p><p>Another way would be to go for paid solutions, like the Elfsight widget ( <a href="https://elfsight.com/instagram-feed-instashow/">https://elfsight.com/instagram-feed-instashow/</a> ), but their artificial restriction on the number of views on the free and $5 plans is just crazy.</p><p>So, I've found a nice alternative, which is much cheaper (literally, free for fair use), does not require an Instagram developer account, is open source, and is very customizable: <a href="https://github.com/restyler/inwidget">https://github.com/restyler/inwidget</a></p><p>The demo website with sample widgets: <a href="https://inwidget.apiroad.net/">https://inwidget.apiroad.net/</a></p><p>This library still requires a subscription to a cloud proxy (the same RapidAPI solution I told you about in a <a href="https://pixeljets.com/blog/scraping-instagram-in-2021/">previous post</a>), but for the end user this is not an issue: just cache the result on the PHP side, refresh the cache every 24 hours, and easily stay under the free plan of the API!</p><p>The best part of the inWidget proxified library, besides its stable operation, is that the widget template is just plain HTML and PHP, so it is easy to customize the output, even for a newbie in programming.</p>]]></content:encoded></item><item><title><![CDATA[Best way to daemonize node.js process in 2021: forever, pm2, nodemon, docker, supervisor, systemd and what to choose]]></title><description><![CDATA[<p>During my development career I used a lot of different solutions to daemonize processes (mostly, node.js scripts), and I decided to do a quick writeup with a very short description of each approach to help fellow developers choose. </p><h2 id="1-forever">1. 
Forever</h2><p><strong>URL:</strong> <a href="https://github.com/foreversd/forever">https://github.com/foreversd/forever</a></p><p><strong>Github stars:</strong> 13.</p>]]></description><link>https://pixeljets.com/blog/using-supervisorctl-for-node-processes-common-gotchas/</link><guid isPermaLink="false">60068543d614490a1cb39536</guid><dc:creator><![CDATA[Anton Sidashin]]></dc:creator><pubDate>Wed, 20 Jan 2021 08:27:00 GMT</pubDate><content:encoded><![CDATA[<p>During my development career I used a lot of different solutions to daemonize processes (mostly, node.js scripts), and I decided to do a quick writeup with a very short description of each approach to help fellow developers choose.</p><h2 id="1-forever">1. Forever</h2><p><strong>URL:</strong> <a href="https://github.com/foreversd/forever">https://github.com/foreversd/forever</a></p><p><strong>Github Stars: </strong>13.2k</p><p><strong>Pros:</strong> super simple and minimal.</p><p><strong>Cons:</strong> not active in development. Heavily tied to the node.js ecosystem. The project's README recommends using pm2 or nodemon.</p><p><strong>Conclusion:</strong></p><p>This is probably not a good solution for new projects, though it's pure JS and is pretty minimal, which I like.</p><h2 id="2-pm2">2. PM2</h2><p><strong>URL:</strong> <a href="https://github.com/Unitech/pm2">https://github.com/Unitech/pm2</a></p><p><strong>Github Stars: </strong>34k</p><p><strong>Pros: </strong></p><ul><li>A lot of features.</li><li>Specializes in Node.js.</li><li>The zero-downtime reload feature is a big one!</li></ul><p><strong>Cons:</strong> </p><ul><li>A big amount of features sometimes means more cognitive overhead when you need a super-basic setup, and also more moving parts and a higher chance that something may break.</li><li>It is actively monetized, so the solution includes paid features. 
For example, metrics is a paid feature.</li><li>Tied to the node.js ecosystem.</li></ul><p><strong>Conclusion:</strong></p><p>Jack-of-all-trades. This is a very good solution if you develop only node.js apps, like a lot of features out of the box, and are okay that it may try to monetize on you.</p><h2 id="3-nodemon">3. Nodemon</h2><p><strong>URL:</strong> <a href="https://github.com/remy/nodemon/">https://github.com/remy/nodemon/</a></p><p><strong>Github Stars: </strong>21.8k</p><p><strong>Pros:</strong></p><ul><li>Boring tech: old &amp; proven</li><li>Unix-like simplicity: does one thing and does it right</li></ul><p><strong>Cons:</strong></p><ul><li>This is not exactly intended to be used in production.</li><li>Tied to the node.js ecosystem</li></ul><p><strong>Conclusion:</strong></p><p>I use nodemon every day, but not for production purposes, since its main purpose is to reload when you save a file, so your code changes are applied to the living process. But I know some developers use it for production as well. I would say you should use it if you develop only node apps and you liked the brutal simplicity of forever.js.</p><h2 id="4-supervisor">4. Supervisor</h2><p><strong>URL: </strong><a href="https://github.com/Supervisor/supervisor">https://github.com/Supervisor/supervisor</a></p><p><strong>Github Stars: </strong>6.6k</p><p><strong>Pros:</strong> </p><ul><li>generic: useful not just for node.js - it is the official way to daemonize Laravel queues</li><li>feature-rich, but in a good way - no excess or paid features</li><li>convenient to use</li><li>boring tech</li></ul><p><strong>Cons: </strong></p><ul><li>Python (a great language, but not exactly performant)</li><li>Not very trendy on Github (because it is boring, right?)</li></ul><p><strong>Conclusion: </strong></p><p>Supervisor is my current favorite. Mostly because it's generic enough to work with Laravel and node.js processes at the same time, has basic logging out of the box, and supports multiple-process setups, too. 
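</p><p><em>As a rough sketch (the program name and paths are placeholders, not from a real project), a minimal Supervisor config for a Node.js worker could look like this:</em></p><!--kg-card-begin: code--><pre><code>; /etc/supervisor/conf.d/worker.conf
[program:worker]
command=node /var/www/app/worker.js
directory=/var/www/app
autostart=true
autorestart=true
stdout_logfile=/var/log/worker.out.log
stderr_logfile=/var/log/worker.err.log
; run two instances of the same script
numprocs=2
process_name=%(program_name)s_%(process_num)02d</code></pre><!--kg-card-end: code--><p><em>After saving it, <code>supervisorctl reread</code> followed by <code>supervisorctl update</code> picks it up.</em></p><p>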
You don't need big performance for this kind of software, and at the same time the developer ergonomics of Supervisor are just great.</p><h2 id="5-docker">5. Docker</h2><p><strong>URL</strong>: <a href="https://github.com/docker">https://github.com/docker</a></p><p>Docker is definitely not a process manager, and it is too big a topic for this post, but it solves a lot of things that we used to solve with process managers, in a beautiful way, so take a look at it when you have time (a spare week... or two).</p><p><strong>Pros:</strong></p><ul><li>Very powerful, with a separate ecosystem</li><li>Huge momentum: everyone is dockerizing everything now</li><li>Generic: can daemonize your node.js process or your database</li></ul><p><strong>Cons:</strong></p><ul><li>It is a separate world of complexity and overhead. If you want to quickly prototype and daemonize a small JS script, this is overkill</li></ul><p><strong>Conclusion:</strong></p><p>A lot of people say that PM2 is not needed anymore because of Docker.</p><p>I use Docker for a lot of things, but its cognitive overhead is significant. Definitely take a look at it, anyway.</p><p>I've spent tens of hours on <strong>Docker </strong>recently, and I really like the concept, and I use it in production pretty successfully, too, but it definitely requires a lot of time and context switching to write Dockerfiles, build images, and test everything - this gets especially tough when you switch back and forth between 3-4 small projects which quickly evolve. I also don't like the fact that Docker eats a lot of CPU on my fresh Macbook 16" even when it does nothing. VS Code Remote mitigates this, but not completely; it is still not perfect for active development.</p><p>The main selling point of Docker is that it allows you to build a 100% reproducible environment. 
<strong>This is a precious gift when you work in a team, but for a sole developer in a side project, it is not always needed, and remember: you have to pay for Docker with your time and context switching, and I felt that it was regularly breaking <a href="https://stackoverflow.blog/2018/09/10/developer-flow-state-and-its-impact-on-productivity/">my flow</a> in smaller projects.</strong></p><p>So, I definitely use Docker, but I prefer to use it for other people's projects, to run them on my server, or for my own projects which are pretty stable and have multiple developers working on them. In this case, you get all the pros and almost no cons :)</p><h2 id="6-systemd">6. systemd</h2><p>systemd is a process manager that you don't need to install (okay, this is true only if you use Ubuntu or Debian - but it is true for me), and that's the best thing about it.</p><p><strong>Pros:</strong></p><ul><li>Already installed on your droplet</li><li>Fast</li></ul><p><strong>Cons:</strong></p><ul><li>Developer ergonomics is not perfect: I need to google every time to find out where I should put my systemd unit file and how to apply it so it works fine.</li></ul><p>I've been using systemd for a few years, but now I am gradually switching back to supervisor, because it is very similar, but more feature-rich, and it provides better developer ergonomics in some parts. 
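</p><p><em>For reference, a minimal unit file (the service name and paths are placeholders) might look like this:</em></p><!--kg-card-begin: code--><pre><code># /etc/systemd/system/worker.service
[Unit]
Description=Node.js worker
After=network.target

[Service]
ExecStart=/usr/bin/node /var/www/app/worker.js
Restart=always
User=www-data

[Install]
WantedBy=multi-user.target</code></pre><!--kg-card-end: code--><p><em>Apply it with <code>systemctl daemon-reload</code> and <code>systemctl enable --now worker</code>.</em></p><p>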
The config file for systemd can be generated via online services like <a href="https://mysystemd.talos.sh/">https://mysystemd.talos.sh/</a> and <a href="https://techoverflow.net/2019/03/11/simple-online-systemd-service-generator/">https://techoverflow.net/2019/03/11/simple-online-systemd-service-generator/</a></p><p>Systemd is a good, generic, and minimalistic approach if you work on Linux-based systems without a Docker setup, but it was not working perfectly with multiple Laravel workers ( <a href="https://github.com/laravel/ideas/issues/1570#issuecomment-475499303">https://github.com/laravel/ideas/issues/1570#issuecomment-475499303</a> ), so I decided to replace it with Supervisor as my go-to approach.</p><p></p><h2 id="my-approach-to-daemonizing-node-js-processes">My approach to daemonizing node.js processes</h2><p>I try to choose and stick to one process manager across all my new projects, if possible, to lessen the cognitive overhead when you log in to some production server and try to quickly debug &amp; restart some process.</p><p>Here is what I use currently for all new projects (in 2021):</p><ul><li><strong>supervisor</strong> for production</li><li><strong>nodemon</strong> for development</li></ul><p>This was a natural choice because I develop Laravel and Node.js apps.</p><p></p>]]></content:encoded></item><item><title><![CDATA[Scraping Instagram in 2021: avoiding 302 and 429 errors]]></title><description><![CDATA[<p>Instagram is a tough target for scraping. </p><p>For one of my side projects, I needed to get information from several public accounts, on a daily basis – for example, their follower counts, and their recent posts.
I tried to use the most popular GitHub scrapers like <code>https://github.com/realsirjoe/instagram-scraper</code> and</p>]]></description><link>https://pixeljets.com/blog/scraping-instagram-in-2021/</link><guid isPermaLink="false">600679d7d614490a1cb3942b</guid><dc:creator><![CDATA[Anton Sidashin]]></dc:creator><pubDate>Tue, 19 Jan 2021 07:04:25 GMT</pubDate><media:content url="https://pixeljets.com/blog/content/images/2021/01/IMG_20210112_114636.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://pixeljets.com/blog/content/images/2021/01/IMG_20210112_114636.jpg" alt="Scraping Instagram in 2021: avoiding 302 and 429 errors"><p>Instagram is a tough target for scraping. </p><p>For one of my side projects, I needed to get information from several public accounts, on a daily basis – for example, their follower counts, and their recent posts. I tried to use the most popular GitHub scrapers like <code>https://github.com/realsirjoe/instagram-scraper</code> and <code>https://github.com/postaddictme/instagram-php-scraper</code> on a DigitalOcean droplet, and it quickly turned out that Instagram either redirects to the /login location or throws <code>429 The maximum number of requests per hour has been exceeded</code>, even though it was the first request to its GraphQL endpoint. Apparently, all datacenter IP ranges have been banned by Instagram. Issues about 302 and 429 errors are created in GitHub issue queues almost every day, so I definitely was not alone.</p><p>I did not want to log in to some fake Instagram account, because scraping via an account would violate Instagram's Terms and is not the most ethical thing to do.
I also did not want &amp; need to do shady things like mass following or anything like that, and public accounts' information is what I was interested in.</p><p>It turned out there exists a solution to the problem – the unofficial Instagram API <a href="https://rapidapi.com/restyler/api/instagram40">https://rapidapi.com/restyler/api/instagram40</a>, which uses residential proxy networks and smart retries to bypass Instagram restrictions. It helped me to build my project, and it has kept working well for more than 3 months, so it looks pretty stable to me. Around 3-4% of requests end with 5xx errors, but it's an explicit error that is instantly visible to my software – so I can just retry failed requests once in a while, and considering Instagram's strict policy, and compared to other solutions, it's just perfect. A proxified PHP scraper (which uses this RapidAPI provider under the hood) is available on GitHub: <a href="https://github.com/restyler/instagram-php-scraper">https://github.com/restyler/instagram-php-scraper</a> (it is a fork of <code>postaddictme/instagram-php-scraper</code>, which was mentioned above)</p><h2 id="how-to-scrape-instagram-in-2021-step-by-step">How to scrape Instagram in 2021: step by step</h2><ol><li><strong>Sign up on RapidAPI</strong>. RapidAPI is a big marketplace where developers submit their APIs, and I am really excited about this platform, since it embraces the divide&amp;conquer approach: it allows app developers to focus on what their end customers need, delegating part of the work to other developers' solutions. The best part about RapidAPI is that their API explorer allows you to subscribe to &amp; test several APIs to see how they perform in real time, and quickly decide if a specific API is good enough for your use case.
It is especially easy for APIs which provide free plans.</li><li><strong>Subscribe to a specific API on the RapidAPI marketplace.</strong> I recommend <a href="https://rapidapi.com/restyler/api/instagram40">https://rapidapi.com/restyler/api/instagram40</a> for the Instagram API.</li><li><strong>Use the API. </strong>For this Instagram API, a ready-made PHP solution is available on GitHub: <a href="https://github.com/restyler/instagram-php-scraper">https://github.com/restyler/instagram-php-scraper</a>, but of course you can also just implement the API calls in your own code.</li></ol><p>Cheers!</p>]]></content:encoded></item><item><title><![CDATA[Frontend development in Docker is pain in 2020. But it gets better]]></title><description><![CDATA[<p>I've just started building a dashboard for my new project, which is an opinionated Node.js API gateway (still in its infancy), with Clickhouse for logging: <a href="https://github.com/restyler/api-gateway">https://github.com/restyler/api-gateway</a>. </p><p>Here is what I'm telling you: in case you forgot, the frontend world is full of bloat. Transpilers, bundlers, and</p>]]></description><link>https://pixeljets.com/blog/frontend-docker-gets-better/</link><guid isPermaLink="false">5f95ab79d614490a1cb3907d</guid><category><![CDATA[esbuild]]></category><category><![CDATA[vuejs]]></category><category><![CDATA[vite]]></category><category><![CDATA[docker]]></category><dc:creator><![CDATA[Anton Sidashin]]></dc:creator><pubDate>Sun, 25 Oct 2020 18:21:14 GMT</pubDate><media:content url="https://pixeljets.com/blog/content/images/2020/10/chart5-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://pixeljets.com/blog/content/images/2020/10/chart5-1.png" alt="Frontend development in Docker is pain in 2020.
But it gets better"><p>I've just started building a dashboard for my new project, which is an opinionated Node.js API gateway (still in its infancy), with Clickhouse for logging: <a href="https://github.com/restyler/api-gateway">https://github.com/restyler/api-gateway</a>. </p><p>Here is what I'm telling you: in case you forgot, the frontend world is full of bloat. Transpilers, bundlers, and compilers, paired with watchers which recompile your project on save and try to make hot reload work in the browser, make the life of an average JS developer <em>full of pain and misery</em>. Especially in Docker.</p><p>Here is a concrete list of Vue-related projects where I've encountered dev environment issues during the last 6 months (all on a MacBook Pro, 15" and 16"):</p><h3 id="nuxt">Nuxt </h3><p><a href="https://nuxtjs.org/">https://nuxtjs.org/</a></p><p>Minimal hacking around the starter app constantly made the MacBook fans scream and the browser constantly hot-reload, with many comments in their GitHub issue queue about the same problem.</p><h3 id="vuestic-dashboard">Vuestic Dashboard</h3><p><a href="https://github.com/epicmaxco/vuestic-admin">https://github.com/epicmaxco/vuestic-admin</a>  </p><p>I liked the design and the level of detail of this Vue dashboard, and decided to adapt it for my project. In Docker, on a MacBook Pro 16", it <strong>takes 2+ minutes to start in dev mode</strong>, with com.docker.hyperkit showing 400% CPU. It could not even build the prod version of the files at all with 4GB RAM dedicated to Docker.
It managed to build prod assets with 6GB RAM + a "delegated" volume Docker setup, which I applied according to the VS Code docs: <a href="https://code.visualstudio.com/docs/remote/containers-advanced#_update-the-mount-consistency-to-delegated-for-macos">https://code.visualstudio.com/docs/remote/containers-advanced#_update-the-mount-consistency-to-delegated-for-macos</a> </p><p>Saving any file in dev mode still took 10+ seconds to recompile.</p><h2 id="why">Why?</h2><p>The bloat of the JS world is amplified by a Docker development setup.</p><p>From what I understand, when you bind your host OS folder to a Docker volume and, god forbid, save some file in your fancy JS project, bazillions of file events are generated with <a href="https://github.com/paulmillr/chokidar">https://github.com/paulmillr/chokidar</a> or a similar library to trigger a recompile, and this avalanche of unoptimised shit keeps the hardware busy. This is not what happens on a prod build, though. There, it's just the compilers and bundlers which manage to make the MacBook and its owner cry. </p><p>It may be manageable for a guy who works with a single JS/TS project every day for months, and does not use Docker, and instead creates a mess of technologies on his host OS, but it may be very upsetting for a full-stack guy with many projects who loves VS Code Containers, like me (I'll tell you about my VS Code setup in the next post, stay tuned).</p><p>Yes, Docker has issues of its own, but it has definitely looked like a <a href="https://mcfunley.com/choose-boring-technology">boring technology</a> to me for the last 2-3 years, at least.
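</p><p>By the way, the "delegated" mount consistency mentioned earlier is configured per bind mount; in docker-compose it can look like this (a sketch, with an illustrative service name and paths):</p>

```yaml
services:
  frontend:
    build: .
    volumes:
      # delegated: the container's view may lag behind the host,
      # which reduces sync overhead on macOS bind mounts
      - .:/workspace:delegated
      # keep node_modules in a named volume so the heaviest
      # directory stays off the slow bind mount entirely
      - node_modules:/workspace/node_modules
volumes:
  node_modules:
```

<p>The node_modules named volume is a common companion trick on macOS, independent of the mount consistency flag.</p><p>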
This time I was firm that I wanted to use VS Code Containers and Docker for my development, because they give me so much value in terms of convenience, flexibility, and bullet-proof reproducible environments.</p><h2 id="the-solution-esbuild">The solution: esbuild</h2><p><a href="https://github.com/evanw/esbuild">https://github.com/evanw/esbuild</a></p><p><strong>Disclaimer</strong>: you don't need to "know" or "learn" esbuild. It is just the technology used under the hood of Vite, the Vue 3 dev bundler.</p><p>esbuild is <em>another</em> JavaScript bundler and minifier. Well, this time it is really worth it.</p><p>Just look at these graphs:</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/content/images/2020/10/image.png" class="kg-image" alt="Frontend development in Docker is pain in 2020. But it gets better"></figure><!--kg-card-end: image--><p></p><p>How is it possible?</p><blockquote>It's written in Go, a language that compiles to native code</blockquote><blockquote>Parsing, printing, and source map generation are all fully parallelized</blockquote><blockquote>Everything is done in very few passes without expensive data transformations</blockquote><p>This looked too good to be true for an almost desperate developer like me, but it indeed turned out to be the solution. Vue 3 uses esbuild in its <a href="https://github.com/vitejs/vite">Vite bundler</a>, so I realized I needed to urgently switch to Vue 3 and ESM for the sake of my mental health.</p><p>I've replaced vuestic with <a href="https://github.com/wobsoriano/v-dashboard">https://github.com/wobsoriano/v-dashboard</a>, which uses Vue 3 and Tailwind. To do that, I had to learn (again?
new tech every day!): </p><ul><li>Tailwind, </li><li>how ES modules work</li><li>the Vue 3 Composition API with all its quirks </li><li>figure out where I can get an ESM version of Axios and everything</li><li>start adapting Chart.js without <a href="https://vue-chartjs.org/">https://vue-chartjs.org/</a> because it is not ready for Vue 3</li></ul><p>but I was ready to do it after I saw how Vite managed to compile my new dashboard.</p><h2 id="here-is-how-vite-works">Here is how Vite works</h2><p>Vite uses two things – ES modules and esbuild – to be super fast. I told you about esbuild above, so...</p><h3 id="es-modules">ES modules</h3><p>It is the "import" statements from our good old friend TypeScript. Major news here, in case you missed it: you can now use them in browsers. Directly. Without Babel and such! Woohoo! </p><p><a href="https://kentcdodds.com/blog/super-simple-start-to-es-modules-in-the-browser">https://kentcdodds.com/blog/super-simple-start-to-es-modules-in-the-browser</a></p><p>This won't work for every visitor of your website, maybe for another year, but it works fine in the latest Chrome and Firefox, so it can be used for development right now.</p><p>So, Vite cuts corners in the proper places and does not bundle JS files for dev builds.</p><p>I should mention that esbuild does not validate the correctness of TypeScript during compilation, but your VS Code with its language server already does it for you, right?</p><h2 id="results">Results</h2><ul><li>I use .vue single file components and TypeScript in development, like before.</li><li>Dev start is <em>instantaneous</em>. Docker's CPU load is zero. Hot reloads are instantaneous.</li><li>The production build with axios, chart.js, a pretty heavy toast library, and all the necessary things like simple state store management <em>takes 20 seconds</em>.
What a drastic change compared to vuestic!</li></ul><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/content/images/2020/10/image-1.png" class="kg-image" alt="Frontend development in Docker is pain in 2020. But it gets better"></figure><!--kg-card-end: image--><p>That's all. My day-to-day frontend development is bearable now. I recommend trying Vite in your new projects (and ES modules + esbuild if you are more into React and other frameworks) - it's not perfect, and still in beta, but the developer experience is night and day.</p>]]></content:encoded></item><item><title><![CDATA[Clickhouse as a replacement for ELK, Big Query and TimescaleDB]]></title><description><![CDATA[<p><strong>UPD 2020: Clickhouse is getting stronger with each release. We are using Clickhouse as an ELK replacement in our <a href="https://apiroad.net">ApiRoad.net</a> project - an API marketplace with ultimate observability and analytics of HTTP requests.</strong></p><p><a href="https://clickhouse.yandex/">Clickhouse</a> is an open source <a href="https://en.wikipedia.org/wiki/Column-oriented_DBMS">column-oriented database management system</a> built by Yandex. Clickhouse is used by Yandex,</p>]]></description><link>https://pixeljets.com/blog/clickhouse-as-a-replacement-for-elk-big-query-and-timescaledb/</link><guid isPermaLink="false">5cb48194106efa4dc9d36ccb</guid><dc:creator><![CDATA[Anton Sidashin]]></dc:creator><pubDate>Wed, 21 Nov 2018 19:04:23 GMT</pubDate><content:encoded><![CDATA[<p><strong>UPD 2020: Clickhouse is getting stronger with each release. We are using Clickhouse as an ELK replacement in our <a href="https://apiroad.net">ApiRoad.net</a> project - an API marketplace with ultimate observability and analytics of HTTP requests.</strong></p><p><a href="https://clickhouse.yandex/">Clickhouse</a> is an open source <a href="https://en.wikipedia.org/wiki/Column-oriented_DBMS">column-oriented database management system</a> built by Yandex.
Clickhouse is used by Yandex, CloudFlare, VK.com, Badoo and other teams across the world, for really big amounts of data (thousands of row inserts per second, petabytes of data stored on disk).</p><p><a href="https://qwintry.com">Qwintry</a> started using Clickhouse in 2018 for reporting needs, and it deeply impressed us with its simplicity, scalability, SQL support, and speed. It is so fast that it looks like magic.</p><h4 id="simplicity">Simplicity</h4><p>Clickhouse is installed with one command in Ubuntu. <br>If you know SQL, you can start using Clickhouse in no time. That does not mean you can do "show create table" in MySQL and copy-paste the SQL to Clickhouse, though. <br>There are substantial data type differences in table schema definitions compared to MySQL, so it will require some time to alter table definitions and explore table engines to feel comfortable.</p><p>Clickhouse works great without any additional software, but ZooKeeper needs to be installed if you want to use replication.</p><p>Analyzing the performance of queries feels good - the system tables contain all the information, and all the data can be retrieved via old and boring SQL.</p><h4 id="performance">Performance</h4><p><a href="https://clickhouse.yandex/benchmark.html#[%22100000000%22,[%22ClickHouse%22,%22Vertica%22,%22MySQL%22],[%220%22,%221%22]]">Benchmark against Vertica and MySQL</a></p><p><a href="https://blog.cloudflare.com/how-cloudflare-analyzes-1m-dns-queries-per-second/">Cloudflare post about Clickhouse</a></p><p><a href="https://www.altinity.com/blog/2017/6/20/clickhouse-vs-redshift">Benchmark against Amazon RedShift</a> <a href="https://www.altinity.com/blog/2017/7/3/clickhouse-vs-redshift-2">[2]</a></p><h4 id="maturity">Maturity</h4><p>Clickhouse development happens on its <a href="https://github.com/yandex/ClickHouse/pulse">GitHub repo</a>, at an impressive pace.</p><h4 id="popularity">Popularity</h4><p>Clickhouse's popularity seems to grow exponentially, especially in the Russian-speaking
community. The recent Highload 2018 conference (Moscow, 8-9 Nov 2018) showed that such monsters as vk.com and Badoo are using Clickhouse in production and inserting data (e.g. logs) from tens of thousands of servers simultaneously ( <a href="https://www.youtube.com/watch?v=pbbcMcrQoXw">https://www.youtube.com/watch?v=pbbcMcrQoXw</a> - <em>sorry, Russian language only</em>).</p><h2 id="use-cases">Use cases</h2><p>After spending some time on research, I think there are a number of niches where Clickhouse can be useful and may even replace other, more traditional and popular solutions:</p><h4 id="augment-mysql-and-postgresql">Augment MySQL and PostgreSQL</h4><p>We've just (partially) replaced MySQL with ClickHouse for the Mautic newsletter platform ( <a href="https://www.mautic.org/">https://www.mautic.org/</a> ), which, due to (questionable) design choices, logs each email it sends, and <em>every link in that email with a big base64 hash</em>, to a huge MySQL table (<code>email_stats</code>). After sending a mere 10 million emails to our subscribers, this table easily takes 150GB of file space, and MySQL starts to feel bad on simple queries.
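</p><p>For illustration, the Clickhouse side of such a migration can be sketched as a MergeTree table like this (the column set here is illustrative, not Mautic's actual schema):</p>

```sql
-- hypothetical append-only email stats table;
-- MergeTree is the main Clickhouse engine family for this kind of data
CREATE TABLE email_stats
(
    event_date Date,
    event_time DateTime,
    email_id   UInt64,
    lead_id    UInt64,
    url        String
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_date, email_id);
```

<p>Sorted, columnar, per-column-compressed storage is exactly what makes huge append-only tables like this cheap to keep around.</p><p>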
To fix the file space issue we successfully used InnoDB table compression, which made the table 4 times smaller, but it still does not make a lot of sense to store more than 20-30 million emails in MySQL just for read-only historical information, since any simple query that for some reason has to do a full scan leads to swapping and heavy I/O load - so we were getting Zabbix alerts all the time.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/content/images/2018/10/mautic_fsize.png" class="kg-image" alt="Example of Clickhouse compression"></figure><!--kg-card-end: image--><p>Clickhouse uses two compression algorithms, <a href="https://www.altinity.com/blog/2017/11/21/compression-in-clickhouse">and typically the compression is closer to 3-4 times</a>, but in this specific case the data was very compressible.</p><h4 id="replace-elk">Replace ELK</h4><p>From my experience, the ELK stack (Elasticsearch in particular) takes far more resources to run than it should when we talk about log storage purposes. Elasticsearch is a great engine if you need good full-text search (do you need full-text search on your logs? I don't), but I wonder how it became the de-facto standard for logging purposes - its ingestion performance, combined with Logstash, gave us trouble even on pretty small loads, and required adding more and more RAM and disk space. Clickhouse as a DB layer is better than ElasticSearch:</p><ul><li>SQL dialect support</li><li>Much better compression of stored data</li><li>Regex search support instead of full text</li><li>Better query plans and overall performance</li></ul><p>The biggest issue that I see now with Clickhouse (vs ELK) is the lack of log shipping solutions and of documentation/tutorials on this topic (everyone can set up ELK following the DigitalOcean manual, which is a huge thing for rapid technology adoption). So, the DB engine is here, but there is no Filebeat for Clickhouse yet.
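</p><p>Rolling a minimal shipper of your own is not that hard, though, thanks to Clickhouse's HTTP interface, which accepts INSERT data in the request body. A Python sketch - the <code>logs</code> table name and the localhost address are my assumptions, and the target table must already exist:</p>

```python
import json
import urllib.parse
import urllib.request


def to_json_each_row(records):
    """Serialize dicts into Clickhouse's JSONEachRow input format:
    one JSON object per line."""
    return "\n".join(json.dumps(r) for r in records) + "\n"


def ship(records, table="logs", host="http://localhost:8123"):
    """POST a batch of log records to Clickhouse over its HTTP
    interface (port 8123). Assumes the target table already exists."""
    query = "INSERT INTO {} FORMAT JSONEachRow".format(table)
    url = "{}/?query={}".format(host, urllib.parse.quote(query))
    body = to_json_each_row(records).encode("utf-8")
    urllib.request.urlopen(url, data=body)
```

<p>Records should be batched into reasonably big chunks before shipping - as the CONS section explains, Clickhouse dislikes frequent small inserts.</p><p>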
Yes, there are fluentd and <a href="https://github.com/flant/loghouse">loghouse</a>, and there is <a href="https://github.com/Altinity/clicktail">https://github.com/Altinity/clicktail</a>, but more time is required before some simple, best way takes a strong lead, so newcomers can just install and use it in 10 minutes. <br>Since I like minimalistic solutions, I tried to use FluentBit (a log shipper with a very small memory footprint) with Clickhouse (and I tried to avoid using Kafka in between), but small incompatibilities like <a href="https://github.com/fluent/fluent-bit/issues/848">date format issues</a> still need to be figured out before this can be done without a proxy layer that converts the data from FluentBit to Clickhouse.</p><p>Talking about a Kibana alternative - there is <a href="https://github.com/Vertamedia/clickhouse-grafana">Grafana, which can be used with Clickhouse as a backend</a>. As far as I understand, there may be performance issues when rendering a huge amount of data points, especially with older Grafana versions - at Qwintry we haven't tried it yet, but complaints about this appear from time to time in the Clickhouse Telegram support channel.</p><h4 id="replace-google-big-query-and-amazon-redshift-for-bigger-companies-">Replace Google Big Query and Amazon RedShift (for bigger companies)</h4><p>A perfect use case for Big Query is: upload 1TB of JSON data and run analytical queries on it. Big Query is a great product, and it's hard to overestimate its scalability. It is a lot more sophisticated piece of software than Clickhouse running on an in-house cluster, but from the customer's point of view, it has a lot of similarities with Clickhouse. Big Query can quickly get pricey since you pay for each SELECT, and it is a SaaS solution with all its pros and cons. <br>Clickhouse is a great fit if you run a lot of computationally expensive queries.
The more SELECT queries you run every day, the more sense it makes to replace Big Query with Clickhouse - it may literally save you thousands of dollars if we talk about many terabytes of <em>processed</em> data (not <em>stored</em> data, which is pretty cheap in Big Query).</p><p>Altinity summed it up well in the "Cost of Ownership" section of <a href="https://www.altinity.com/blog/2017/10/23/migration-to-clickhouse">their article</a>.</p><h4 id="replace-timescaledb">Replace TimescaleDB</h4><p>TimescaleDB is a PostgreSQL extension specializing in time-series data. <br><a href="https://docs.timescale.com/v1.0/introduction">https://docs.timescale.com/v1.0/introduction</a></p><p>Clickhouse has not yet started to seriously compete in the time-series niche, but due to its columnar nature and <a href="https://clickhouse.yandex/docs/en/development/architecture/">vectorized query execution</a>, it is faster than TimescaleDB in most analytical queries, its batch data ingestion performance is ~3x better, and Clickhouse <strong>uses 20 times less disk space</strong>, which is really important for big amounts of historical data: <br><a href="https://www.altinity.com/blog/clickhouse-for-time-series">https://www.altinity.com/blog/clickhouse-for-time-series</a></p><p>The only way to save some disk space when using TimescaleDB is to use ZFS or a similar file system.</p><p>Upcoming updates to Clickhouse will most likely introduce <a href="https://github.com/yandex/ClickHouse/issues/838">delta compression</a>, which will make it an even better fit for time-series data.</p><p>TimescaleDB may be a better choice than (bare) Clickhouse for:</p><ul><li>small installations with a very low amount of RAM (&lt;3 GB)</li><li>a big amount of frequent small INSERTs which you don't want to buffer into bigger chunks</li><li>better consistency and ACID</li><li>PostGIS support</li><li>a tight mix and easy joins with existing PostgreSQL tables (since essentially TimescaleDB is PostgreSQL)</li></ul><h4
id="compete-with-hadoop-and-mapreduce-systems">Compete with Hadoop and MapReduce systems</h4><p>Hadoop and other MapReduce products can do a lot of complex calculations, but they tend to have huge latencies, and Clickhouse fixes that - it processes terabytes of data and gives results almost instantly. So for rapid, interactive analytical research, Clickhouse can be a lot more interesting for data engineers.</p><h4 id="compete-with-pinot-and-druid">Compete with Pinot and Druid</h4><p>The closest contenders (column-oriented, linearly scalable, open source) are Pinot and Druid; there is a wonderful comparison write-up here: <br><a href="https://medium.com/@leventov/comparison-of-the-open-source-olap-systems-for-big-data-clickhouse-druid-and-pinot-8e042a5ed1c7">https://medium.com/@leventov/comparison-of-the-open-source-olap-systems-for-big-data-clickhouse-druid-and-pinot-8e042a5ed1c7</a> <br>The article is a bit outdated: it states that Clickhouse does not support UPDATE and DELETE operations, <a href="https://www.altinity.com/blog/2018/10/16/updates-in-clickhouse">which is not exactly the case for the latest versions</a>.</p><p>We do not have any experience with these, but <br>I don't like the complexity of the infrastructure that is required to run Druid and Pinot - a big amount of moving parts, and Java everywhere.</p><p>Druid and Pinot are Apache incubator projects, and their GitHub pulse pages show a good development pace. BTW, <a href="http://incubator.apache.org/projects/">Pinot entered the Apache Incubator just a month ago</a> - in October 2018. Druid entered it 8 months earlier, on 2018-02-28.</p><p>This is really interesting and raises some (stupid?) questions, because I have very little information on how the ASF works. <br>Did the Pinot authors notice that the Apache Foundation is giving a lot of acceleration to Druid, and feel a bit envious?
:) <br>Will it somewhat slow down Druid and accelerate Pinot development (of course, if there is a phenomenon of volunteer contributors who were committing to Druid but suddenly got interested in Pinot)?</p><h4 id="clickhouse-cons">Clickhouse CONS</h4><h6 id="immaturity">Immaturity</h6><p>This is obviously not a <a href="http://mcfunley.com/choose-boring-technology">boring technology</a> yet (but there is no such thing among columnar DBMSs, anyway)</p><h6 id="small-inserts-at-high-rate-perform-poorly">Small inserts at high rate perform poorly</h6><p>Inserts need to be batched into bigger chunks; the performance of small inserts degrades proportionally to the number of columns in each row. This is just how data is stored on disk in Clickhouse - each column means 1 file or more, so to do a 1-row insert containing 100 columns, at least 100 files need to be opened and written to. That's why some mediator is required to buffer inserts (unless the client can buffer them itself) - usually Kafka or some queue system. Or, the Buffer table engine can be used, and the data can be copied to MergeTree tables later in bigger chunks.</p><h6 id="joins-are-restricted-by-server-ram">Joins are restricted by server RAM</h6><p>Well, at least there <em>are</em> joins! E.g. Druid and Pinot do not have joins at all - since they are hard to implement right in distributed systems which do not support moving big chunks of data between nodes.</p><h3 id="conclusion">Conclusion</h3><p>At Qwintry, we plan to use Clickhouse extensively in the upcoming years, because it hits a great balance of performance, low overhead, scalability, and simplicity.
I am pretty sure its adoption will increase rapidly as soon as the Clickhouse community generates more recipes on how it can be used in small and medium-sized installations.</p>]]></content:encoded></item><item><title><![CDATA[Good thing in PHP nobody talks about]]></title><description><![CDATA[<p>Do you know what I like about PHP?</p><p>It is designed to die after execution.</p><p>You should not fear code duplication, bad formatting of code, or bad variable naming - the mentioned artefacts of code smell are bad, but are nowhere close to statefulness in terms of contaminating your code.</p>]]></description><link>https://pixeljets.com/blog/good-thing-in-php-nobody-talks-about/</link><guid isPermaLink="false">5cb48194106efa4dc9d36cca</guid><dc:creator><![CDATA[Anton Sidashin]]></dc:creator><pubDate>Mon, 27 Aug 2018 19:04:41 GMT</pubDate><content:encoded><![CDATA[<p>Do you know what I like about PHP?</p><p>It is designed to die after execution.</p><p>You should not fear code duplication, bad formatting of code, or bad variable naming - the mentioned artefacts of code smell are bad, but are nowhere close to statefulness in terms of contaminating your code.</p><p>One of the biggest restrictions of PHP, <a href="https://software-gunslinger.tumblr.com/post/47131406821/php-is-meant-to-die">hated by a lot of people</a>, and at the same time the single biggest factor which makes poor PHP code written by junior developers manageable, is this. State is in most cases completely separated from the PHP process and is stored in a (server-side) database, and even in the shittiest code samples the database is something you can rely on as the source of truth, so reproducing bugs is (often) easy, and gradual code rewriting (old code and new code using the same database - state) is possible. And the deployment is so easy - you just upload your script! And the development, oh, you just edit this line, save the file... and it works!
Think about it.</p><ul><li>Stateless (this is trendy now, right?)</li><li>Easy deployment</li><li>Easy development</li><li>Easy debugging</li></ul><p>All of this happiness is just there out of the box, because <em>it dies every time</em>.</p><p>Talking about this language design choice - I am not even sure it was intentional (UPD: <a href="https://news.ycombinator.com/item?id=17853755">okay, Rasmus Lerdorf confirmed it was intentional in the HN discussion of this article</a>); it might have been a side effect. But this <em>restriction</em>, I think, is one of the reasons why PHP, historically full of bad code and strange architectural decisions, gained its popularity and is still alive as a technology.</p><p>This simple concept of strict state separation was alien to the JS world because it was not there by design; this is why most frontend frameworks of the first wave (like Angular 1) (and complex jQuery code, of course) were such a pain in the ass before Redux and other solutions were invented, which separated state from the interface and finally made it manageable. In JS, you <em>can</em> separate state.
In PHP, you <em>have to</em>.</p><p>Every time I see the Node.js code of a sample websocket chat, which is a long-living process, I can't help but shudder because of that global "subscribers" variable which holds all the open websocket connections - I just see how this async long-living complexity explodes in the hands of a young and fast-typing developer.</p><p>This is why Ajax, when it was introduced, while giving so much good stuff to end users, made the life of PHP developers so complex - state suddenly appeared on the frontend, and we could not even imagine where this would lead us - jQuery, then Backbone, then Angular, then React, and here we go - now we have the frontend developer profession - separate people dealing with state and stateful interfaces there on the front.</p><p>This is why you should be careful when working with queues and daemons (and you will still need them in any mid-sized PHP project). They do not forgive, they are hard to debug, they leak memory, and the OS kills them once in a while when your bad code is launched in production. A regular PHP script forgives you; please remember this and appreciate it the next time you wish your old PHP framework were better at async code and long-living threads, or when you praise Docker for being stateless - PHP is stateless, too :)</p><p>UPD: <br>HackerNews discussion: <a href="https://news.ycombinator.com/item?id=17853755">https://news.ycombinator.com/item?id=17853755</a></p>]]></content:encoded></item><item><title><![CDATA[Vue.js vs React in 2017: state of art]]></title><description><![CDATA[<p>One year ago I published a post about the reasons why our team chose Vue.js over React for our qwintry.com project rewrite.
I made some predictions back then:</p><p>I expect Vue to become a primary JS framework in 16-24 months if Evan You makes right steps, at least around</p>]]></description><link>https://pixeljets.com/blog/vue-js-vs-react-what-to-expect-in-2018/</link><guid isPermaLink="false">5cb48194106efa4dc9d36cc8</guid><category><![CDATA[vuejs]]></category><category><![CDATA[react]]></category><dc:creator><![CDATA[Anton Sidashin]]></dc:creator><pubDate>Sun, 24 Dec 2017 14:34:47 GMT</pubDate><content:encoded><![CDATA[<p>One year ago I published a post about the reasons why our team chose Vue.js over React for our qwintry.com project rewrite. I made some predictions back then:</p><p>I expect Vue to become a primary JS framework in 16-24 months if Evan You makes right steps, at least around backenders and smaller teams of frontenders. I still consider React stack to be the primary JS framework of 2017, especially if React Native manages to mature and improve itself with the same pace it used to.</p><p><a href="http://pixeljets.com/blog/why-we-chose-vuejs-over-react/">me, 10 dec 2016</a></p><p>Since the guys from stateofjs.com recently published their 2017 results, there is some material for analysis and thoughts here.</p><h2 id="stateofjs-2017-frontend-framework-results">Stateofjs 2017 Frontend Framework results</h2><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/content/images/2017/12/stateofjs_2017_front.png" class="kg-image"></figure><!--kg-card-end: image--><p><a href="https://stateofjs.com/2017/front-end/results">https://stateofjs.com/2017/front-end/results</a></p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/content/images/2017/12/stateofjs_2016_front.png" class="kg-image"></figure><!--kg-card-end: image--><p><a href="http://2016.stateofjs.com/2016/frontend/">http://2016.stateofjs.com/2016/frontend/</a></p><p>So, one year has passed, and Vue.js is clearly the 
leader in "would like to learn" by a huge margin, which makes me think the next year will be the year of Vue.js success, with React stable in its growth, while Angular won't be able to keep up with these two rivals. Compare this with 2016, when Vue.js was clearly a dark horse and "just another JS framework", Angular was the second choice for "serious guys", and React was the leader.</p><p>But.. Vue.js will dominate only on the web, definitely not in the overall frontend world. <br>React is becoming the technology that will rule the frontend world.</p><p>Why?</p><h2 id="synergy-and-satellite-products">Synergy and satellite products</h2><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/content/images/2017/12/stateofjs_2017_mobile.png" class="kg-image" alt="Stateofjs 2017 mobile frameworks results"></figure><!--kg-card-end: image--><p><a href="https://stateofjs.com/2017/mobile/results/">https://stateofjs.com/2017/mobile/results/</a></p><p>Synergy, my friends, is the key to React's upcoming monopoly.</p><h4 id="react-native">React Native</h4><p>Vue.js failed to provide a viable alternative to React Native (Weex and Quasar are too young, fragmented and weak), and React Native + React.js is exploding, because if you (as an average 2018 developer) master React and Redux for the web, you get a huge payoff right away: if you want, you can be productive in the mobile world in a matter of weeks with React Native, which has clearly demonstrated its advantages over Cordova and other hybrid approaches – just look at the Adidas Glitch app, and Skype for Android and iOS, to feel the abyss between React Native and competing hybrid technologies:</p><p><a href="https://medium.com/possible-cee/how-we-have-been-breaking-patterns-with-the-adidas-glitch-d734340fd40e">https://medium.com/possible-cee/how-we-have-been-breaking-patterns-with-the-adidas-glitch-d734340fd40e</a></p><p><a 
href="https://mspoweruser.com/skype-is-testing-a-new-android-app-with-a-new-design-reaction-feature-and-bing-integration/">https://mspoweruser.com/skype-is-testing-a-new-android-app-with-a-new-design-reaction-feature-and-bing-integration/</a></p><p>Modern development is about mobile, not just about the web.</p><p>React Native is a success, and it will drag React.js to the sky.</p><p>We (as the Qwintry team) are preparing big releases of our iOS and Android apps, scheduled for the first quarter of 2018, for our new website rewritten from scratch (codename Q3, powered by Vue.js and Yii2), and our new apps are powered by React Native. <br>When we were considering React Native 1.5-2 years ago for the previous versions of our apps, our Swift developer's conclusion was "definitely no" - a "boring" but logical decision: it was a great idea to wait for the technology to mature. We ended up with a Swift app for iOS and a Java app for Android back then, and I am pretty sure we avoided a lot of pain.</p><p><a href="https://itunes.apple.com/ru/app/%D0%B1%D0%B0%D0%BD%D0%B4%D0%B5%D1%80%D0%BE%D0%BB%D1%8C%D0%BA%D0%B0-%D0%B4%D0%BE%D1%81%D1%82%D0%B0%D0%B2%D0%BA%D0%B0-%D0%B8%D0%B7-%D0%BC%D0%B0%D0%B3%D0%B0%D0%B7%D0%B8%D0%BD%D0%BE%D0%B2-%D1%81%D1%88%D0%B0/id1015885334?mt=8">Link to our iOS app, which is now written in Swift but will get a React Native release in a matter of weeks</a></p><p>Now our Swift developer is writing JS code in React Native and admits it's pretty good, and that this is the right moment to jump to the new stack, because in a lot of cases the advantages of React Native now outweigh the disadvantages.</p><p>Our frontend guys who write Vue.js code for the web became productive in React Native in a matter of weeks, but I think this process would have been even less painful, and our stack simpler, if we had chosen React.js for the web. 
We definitely do not regret choosing Vue.js for the web (<a href="http://pixeljets.com/blog/why-we-chose-vuejs-over-react/">read more about why we did that in my previous post</a>), and my expectations of Vue.js web domination are becoming reality, but I still expect a lot of other small and mid-sized teams in 2018 to choose the React stack both for web and mobile because of the synergy – it is just the obvious choice now that React Native is so good. Managing a separate native mobile development process with Java and Swift/Objective-C is still a great and safe choice, but it might get expensive from the business point of view, and going through app store approvals still sucks compared to the magic of <a href="https://github.com/Microsoft/react-native-code-push">React Native Codepush</a>.</p><h4 id="graphql">GraphQL</h4><p>GraphQL is another Facebook product which adds value to the React ecosystem. <br>It's not mature yet, it is complex, and it has its disadvantages - but it looks like GraphQL is the future replacement for REST.</p><p>GraphQL is another sign that innovation in the frontend world mostly happens in the React universe, and these innovations are then adopted &amp; improved in other frameworks.</p><h2 id="more-info">More info</h2><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/content/images/2017/12/download--1-.svg" class="kg-image"></figure><!--kg-card-end: image--><p>Angular is definitely looking "better" here, but I don't think this indicates that Angular as a framework will be successful in the long term – to my mind it just indicates that migrating from Angular 1 to Angular 2+ is the obvious choice for a regular developer who maintains legacy Angular 1 code, but when he tries to migrate and sees the number of changes between the old and new versions of Angular, it leads to frustration, and that explains the big number of questions on SO.</p><p><a 
href="https://insights.stackoverflow.com/trends?tags=angularjs%2Cangular%2Creactjs%2Cvue.js%2Cember.js">https://insights.stackoverflow.com/trends?tags=angularjs%2Cangular%2Creactjs%2Cvue.js%2Cember.js</a></p><h2 id="could-it-be-different">Could it be different?</h2><p>There was a moment in 2017 when Vue.js could have won the "war" with React in terms of gaining developer traction, even without a proper mobile solution in its stack. <br>I am talking about the situation with React licensing: <br><a href="https://ma.tt/2017/09/on-react-and-wordpress/">https://ma.tt/2017/09/on-react-and-wordpress/</a> <a href="https://news.ycombinator.com/item?id=15253781">(HackerNews discussion)</a></p><p>Wordpress was considering ditching React as the frontend solution for its <a href="https://ithemes.com/2017/11/09/gutenberg-wordpress-editor-10-things-to-know/">layout builder</a>, which could have resulted in Vue.js being chosen for the Wordpress ecosystem, pretty much like it was chosen for Laravel.</p><p>The intrigue was killed by Facebook's just-in-time wise decision to fix the React license: <br><a href="https://ma.tt/2017/09/facebook-dropping-patent-clause/">https://ma.tt/2017/09/facebook-dropping-patent-clause/</a></p><p>Congratulations, Facebook and React team - you did a good job this year, and we will be glad to use React products in our stack and recommend both Vue.js and React to other teams, depending on their situation.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/content/images/2017/12/DJI_0057.JPG" class="kg-image"></figure><!--kg-card-end: image--><p><em>New Qwintry warehouse in New Castle, Delaware, right after Black Friday 2017 customer purchase volumes hit us hard. 
Every package here is managed, received, and picked via our Swift and React Native powered operator apps, and customers create their orders via Swift/Java (soon React Native) customer apps.</em></p><p>UPD: This post made its way to the Hacker News frontpage, and <a href="https://news.ycombinator.com/item?id=15999688">there is a useful discussion with 100+ comments there</a>.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/content/images/2017/12/Screenshot-at-Dec-25-14-31-30.png" class="kg-image"></figure><!--kg-card-end: image-->]]></content:encoded></item><item><title><![CDATA[Why we chose Vue.js over React]]></title><description><![CDATA[Why we chose Vue.js over React]]></description><link>https://pixeljets.com/blog/why-we-chose-vuejs-over-react/</link><guid isPermaLink="false">5cb48194106efa4dc9d36cc6</guid><category><![CDATA[vuejs]]></category><category><![CDATA[react]]></category><dc:creator><![CDATA[Anton Sidashin]]></dc:creator><pubDate>Sat, 10 Dec 2016 21:13:33 GMT</pubDate><content:encoded><![CDATA[<p>The Qwintry team recently started an active migration to Vue.js as the frontend framework in all our legacy and new projects:</p><ul><li>in our legacy Drupal system (qwintry.com)</li><li>in our new, completely rewritten qwintry.com branch</li><li>in our Yii2-powered b2b system (logistics.qwintry.com)</li><li>in all our smaller internal and external projects (mostly with PHP and Node.js backends)</li></ul><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/content/images/2019/04/why-we-chose-vuejs-over-react-qwintry_box.jpg" class="kg-image" alt="Qwintry box"></figure><!--kg-card-end: image--><p><br><i>Our package at a customer's door - from our <a href="https://qwintry.com/en/users-reviews">happy customer reviews</a></i></p><p>We have a pretty big codebase, mostly PHP&amp;JS.</p><p>We decided to use Vue.js after completing an evaluation of modern frameworks: we built our <a 
href="https://qwintry.com/en/calculator">customer calculator</a> on React, Vue.js and Angular2.</p><h2 id="my-thoughts-on-react-js">My thoughts on React.js</h2><p>React skyrocketed in the JS world, and it is now probably the default choice for JS devs when we talk about choosing a frontend view framework. <br>I've built some SPAs and dynamic widgets on React, and I've played around with React Native (on iOS) and Redux as well. I think React was a great step forward for the JS world in terms of <a href="https://en.wikipedia.org/wiki/Single_source_of_truth">state-awareness</a>, and it showed lots of people real functional programming in a good, practical way. I think React Native is huge - it just changes the landscape of native development.</p><p>The cons of React for me are:</p><h3 id="purity-immutability-and-ideology-over-getting-things-done">Purity, immutability and ideology over getting things done</h3><p>Don't get me wrong. I appreciate pure functions and the simplistic render() approach - no doubt, that's a great idea which works great in real life. I am talking about other things. <br>I guess <a href="https://facebook.github.io/react/blog/2016/07/13/mixins-considered-harmful.html">this</a> level of strictness and purity may be useful when you have 1000 devs in your company - just about the time when you decide to develop your own syntax to get <a href="http://hacklang.org/">static types in all the PHP code you write</a>. Or when you are a Haskell developer coming to the JS world. But most companies have far smaller dev teams and other goals than Facebook. I will elaborate more on this below.</p><h3 id="jsx-sucks">JSX sucks</h3><p>I know, I know! It is <em>"just plain javascript with special syntax"</em>. Our design&amp;html guys, who need to focus on making this specific form beautiful by wrapping its elements in various quantities of divs - right now - don't give a sh*t about purity and plain ES6. 
Applying designs to React components still sucks big time, because JSX lacks readability. Not being able to put a plain old IF condition around some block of HTML code sucks; please don't believe the React fans who keep telling you that you don't need it because you have ternary operators. Let me assure you - this is still a mess of HTML and JS when you edit and read it, even though it gets compiled to pure JS.</p><!--kg-card-begin: code--><pre><code>&lt;ul&gt;  
       {items.map(item =&gt;
         &lt;li key={item.id}&gt;{item.name}&lt;/li&gt;
       )}
&lt;/ul&gt;  
</code></pre><!--kg-card-end: code--><p>Lots of developers (including me - but I am not there anymore) think that this specific restriction on syntax will make you stronger and help you write more modular code, because you have to put your chunks of code into smaller helper functions and use them inside your render() function, like this guy suggested: <br><a href="http://stackoverflow.com/a/38231866/1132016">http://stackoverflow.com/a/38231866/1132016</a></p><p>JSX is also the reason you have to keep splitting your 15-lines-of-html-code component into 3 components, 5 lines of code in each.</p><p>Don't think this is a great workaround that makes you a better developer just because you now have to structure your code this way.</p><p>Here is the thing: <br>When you write a relatively complex component - which you are probably not going to put into a public github repo tomorrow to showcase it on hackernews - this approach of splitting components into super-dumb components because of JSX restrictions will always put you out of flow when you are solving a real business task. No, I am not saying that the idea of smaller components is bad or doesn't work. <br>You should clearly realize that you need to split your code into components to keep your codebase manageable and reusable. But you should do it only when you think that a specific logical entity in your code deserves to be a separate component with its own props - and <b>not</b> on every two or three IFs that you write via a ternary operator! 
Every time you create a new component here and there, it costs you and <a href="https://psygrammer.com/2011/02/10/the-flow-programming-in-ecstasy/">your flow</a> a penny (probably more), because you need to switch from business-task thinking (when you already have the current component's state model in your head and just need to add some html here and there to make it run) to "manager thinking": you go create a separate file for your component, start thinking about the props of this new component, how they map to state, how you are going to pass callbacks inside, etc, etc. <br>As a result, your speed of writing code drops, because you are forced into excessive and premature (potential) modularity of components in places where you don't really need it. In my opinion, premature modularity is very similar to premature optimization.</p><p>For me and my team, the readability of code is important, but it is also very important that writing code is fun. It is no fun to create 6 components when you are implementing a really simple calculator widget. In a lot of cases it is also bad in terms of maintenance, modification, or applying a visual overhaul to some widget, because you need to jump around multiple files/functions and check each small chunk of HTML separately. Again, I am not suggesting writing monoliths - I suggest using components instead of microcomponents for day-to-day development. It is just common sense.</p><h3 id="working-with-forms-and-redux-in-react-will-make-you-type-all-day-long">Working with forms and Redux in React will make you type all day long</h3><p>React is about pureness and a clean one-way flow, remember? That's why <a href="https://facebook.github.io/react/docs/two-way-binding-helpers.html">LinkedStateMixin became a <em>persona non grata</em></a>, and now you have to create 10 functions to get input from 10 inputs. 
80% of these functions will contain a single line with a this.setState() call or a redux action call (and then you will probably have to create another 10 constants - one per input). I guess that would be acceptable if you could generate all this code just by thinking about it.. but I am not aware of any IDE that significantly improves this. <br>Why do you have to type so much? Because two-way binding is considered dangerous by the big guys in big-enterprise apps. I can confirm that two-way data flow code is sometimes not as clean to read, but most of these fears are mixed with the overall pain of Angular 1, where two-way binding was bad - and still.. it probably was not the biggest fail even there.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/content/images/2019/04/why-we-chose-vuejs-over-react-282--1-.gif" class="kg-image"></figure><!--kg-card-end: image--><p><br>I can't share the code for obvious reasons, but writing it in Vue was real fun, and the code is very readable.</p><p>And I know for sure that creating a separate function for each input to handle a widget like this in React would certainly not make me happy.</p><p>Redux sounds like a <a href="https://forums.meteor.com/t/redux-boilerplate-reduction-laziness-or-not-switching-to-mobx/25568">synonym</a> of verbosity as well. And it's easy to find developers who claim that Mobx is turning React into Angular just because it leverages two-way binding - see my point #1 about purity. It looks like a lot of smart people value the purity of their codebase more than getting the job done (which is fine if you don't have deadlines, I guess).</p><h3 id="excessive-tooling">Excessive tooling</h3><p>React was created with Babel in mind. You can't take a step in a real-world React app without a bunch of npm packages, with a compiler to ES5 going first. A simple app based on the <a>official react starting package</a> code has around 75MB of JS code in node_modules. 
<br> It is not a critical thing - it's related more to the JS world in general than to React specifically - but it adds to the overall frustration of using React as well.</p><h2 id="angular-1-too-much-freedom-is-sometimes-bad">Angular 1: too much freedom is sometimes bad</h2><p>Angular 1 was a great frontend framework located in the opposite corner (from React) of the imaginary JS map of purity and readability of codebases: it allows you to start quickly, it gives you some real fun for your first 1k lines of code, and then it practically forces you to write shitty code. You will probably get lost in directives and scopes, and two-way data flows across all the layers of your app will just be the cherry on the pie of code that your freshly hired devs won't even want to touch, because it won't be manageable.</p><p>Why so? <br>Angular.js was created in 2009, when the frontend world looked pretty simple and nobody was even thinking about state hygiene. You can't blame these guys - they were just creating a competitor to Backbone with some new concepts, and they wanted to do less typing.</p><h2 id="angular2">Angular2</h2><p>Just build a hello world app and look at the number of files you get in your repo. You will have to use Typescript (and I am not 100% sure it is something I am going to enjoy every day - <a href="https://medium.com/javascript-scene/angular-2-vs-react-the-ultimate-dance-off-60e7dfbc379c#a431">great writeup from Eric Elliott on this topic</a>) and compilers to start working. That was enough for me.. it is still too much typing before I start real work. To my mind, the Angular 2 guys are trying to build the perfect framework which will beat React, instead of a framework which solves business tasks for the average user. Maybe I am wrong and my mind might change - I don't have a lot of experience building Angular2 apps yet; we've just built a demo customer calculator app for our in-house evaluation. 
The wonderful <a href="https://vuejs.org/v2/guide/comparison.html">comparison page on the Vue.js website</a> states that Angular2 is a good framework which shares a lot of concepts with Vue.</p><h2 id="vue-js">Vue.js</h2><p>In short, Vue.js is the thing I've been waiting for, for a long time (I will be talking about Vue.js 2, which got quite a few improvements over the first version of Vue and is the current stable version of the framework). For me, in terms of elegance, conciseness, and focus on getting things done, Vue.js is the biggest change in JS since the day I was blown away by jQuery in 2007. <br>If you look at Vue.js popularity graphs, you will notice it is not just me: <a href="https://www.google.ru/trends/explore?q=vue.js,react.js,angular.js">https://www.google.ru/trends/explore?q=vue.js,react.js,angular.js</a> <br>Vue.js is one of the most rapidly growing JS frameworks of 2016, and I think it's not just another hype wave driven by fans who switch to a newer JS framework every 3 months, or by the authority (and money) of one big company.</p><p>Laravel added Vue.js to its core, which is a big thing.</p><h3 id="pros-of-vue-js">Pros of Vue.js</h3><p>Vue.js hits a sweet spot between readability&amp;maintainability and fun - a spot between React and Angular 1, and if you look at the Vue guide, you will instantly notice how many nice things it got from these frameworks. <br>From React, it got the component-based approach, props, one-way data flow for the component hierarchy, performance, virtual rendering ability, and an understanding of the importance of proper app state management. <br>From Angular, it got similar templates with good syntax, and two-way binding when you need it (inside a single component).</p><p>Vue.js is very easy to start with - I've seen this in our team. 
It does not enforce any compilers by default, so it's really easy to drop Vue into your legacy codebase and start improving your jQuery mess with good JS code.</p><h3 id="right-amount-of-magic">Right amount of Magic</h3><p>Vue.js is very easy to work with, both in HTML and JS - you can build pretty complex templates without losing your focus on the business task, and the template usually maintains great readability even when it gets really big. By that point you've usually made good progress on the business task, and you might want to refactor the templates and split them into smaller components - and at that moment you see the whole "picture" of your app a lot better than when you started.</p><p>From my experience, this differs vastly from the approach I used to have in React: I saved myself a lot of hours here. In React, you <em>have</em> to split components into micro-components and micro-functions while writing the initial version of your code - or you will literally get buried in the mess of your own code. In React you will spend a lot of time polishing the props and refactoring your super-small components (that will never be re-used later) again and again, since in the middle of the writing process you can't clearly see whether you will have to change the flow of your app logic.</p><p>Working with html forms is a breeze in Vue. This is where two-way binding shines. It does not cause me any issues even in complex cases, though watchers may remind you of Angular 1 at first glance. One-way flow with callback passing is always at your service when you do your component splitting.</p><p>If you want some compiler magic, linting, PostCSS and ES6 - <a href="https://github.com/vuejs/vue-loader">you got it</a>. The Vue extension seems to be becoming the default way of writing public components in Vue.js 2. 
By the way, the idea of per-component scoped CSS working out of the box looks <i>really</i> nice and can reduce the need for careful css hierarchy naming and technologies like <a href="http://getbem.com/">BEM</a>.</p><p>Vue.js has pretty simple and useful state and props management in its core, via the data() and props options - they work great in the real world. Better separation of concerns is available via <a href="https://github.com/vuejs/vuex">Vuex</a> (which, to my understanding, is similar to Mobx in React - with some mutation of state involved).</p><p>I think a good percentage of Vue.js use cases won't ever require the kind of state management Vuex provides, but it is always good to have the option.</p><h3 id="cons-of-vuejs">Cons of VueJS</h3><ol><li>The biggest one: non-descriptive runtime errors in templates. This is pretty similar to Angular 1. Vue.js manages to give a lot of useful warnings for your JS code - for example, there are warnings when you try to mutate props or use the data() method incorrectly; the good influence of React can be seen very well here. But runtime errors in templates are still a weak point of Vue - the exception stacktraces are often not useful and lead into Vue.js internal methods.</li><li>The framework is young. There are no stable community components - a lot of them were built for Vue.js 1, and it is sometimes not easy for newcomers to tell from a github repo which version of Vue a library is built for. 
This issue is softened by the fact that you can do huge things in Vue without any additional libraries - you will probably just need some ajax library (<a href="https://medium.com/the-vue-point/retiring-vue-resource-871a82880af4#.si9301ufo">vue-resource will be a good choice if you don't care about isomorphic apps, axios otherwise</a>), and probably vue-router, which is considered a core library with good support.</li><li>Chinese comments in the code of many community libraries - this is not surprising, since Vue.js is getting very popular in China (the author speaks Chinese).</li><li>A single-guy project? Not exactly a real issue, but something to consider. Evan You is the guy who built Vue, after working at Google and Meteor. Laravel also used to be a single-guy project, and it is still a huge success, but you never know..</li></ol><h2 id="vue-js-in-drupal">Vue.js in Drupal</h2><p>Disclaimer: we do not plan to use Drupal 8 any time soon at Qwintry, since we are switching to faster and simpler PHP&amp;Node.js frameworks, and our legacy codebase is Drupal 7. <br>Since our legacy system qwintry.com is powered by Drupal, it was very important for us to test this new framework there, in the wild. I am not proud of a lot of the code in our legacy codebase, but it works and generates our revenue, so we respect it, improve it, and build a lot of new features there. Here is the list of things I've built with Vue&amp;Drupal already: <br>In-place node editing for complex order entities. This includes generating invoices for customers and quick editing of product items. It required building a basic JSON api for loading and saving nodes - nothing too fancy, just a few menu callbacks. 
<br>Two REST-powered dashboards for the proprietary SaaS software we use, so our customer support does not have to log in to separate websites to quickly check information related to a specific customer - everything is built right into the customer profile on our website now.</p><p>I know a lot of backend developers are still stuck in 2010 and the Drupal 7 core Ajax system. <br>I know how complex Drupal can get when you try to build some fancy multi-step ajax interaction form using core features - it's just crazy hard to maintain this code later. Yes, I am looking at you, <code>ctools_wizard_multistep_form()</code>, and you, <a href="https://api.drupal.org/api/drupal/includes!ajax.inc/function/ajax_render/7.x"><code>ajax_render</code></a>! <br>At the same time, these Drupal developers are pushed forward by modern UI requirements, but they might be scared off by the increased complexity of modern JS frameworks. Yep, that was me a year ago. Let me tell you - you won't find a better time and way to improve your interfaces than to get Vue.js now, put it in your /sites/all/libraries, add it to your template via <code>drupal_add_js()</code>, and start hacking around. You will be shocked at how much easier it is to maintain a bunch of pure JSON callbacks sitting in your <code>hook_menu</code> when the client side, including forms, is completely powered by Vue.</p><h2 id="vue-js-in-yii2">Vue.js in Yii2</h2><p>Fun fact: Yii was created by a Chinese-speaking guy, Qiang Xue. So you might call the Yii+Vue stack not just very difficult to pronounce, but also a Chinese stack :) <br>For the new version of Qwintry.com (not yet public) we chose Yii2, which I believe is one of the best and fastest PHP frameworks available. 
It is definitely not as popular as Laravel, which is rocking the PHP world now, but we are <a href="http://pixeljets.com/blog/building-scalable-it-system-delivery-us-russia-drupal-symfony2-and-yii2-compared">pretty happy with our Yii2 stack</a> (though we look at Laravel from time to time - the guys are doing a great job). <br>We are gradually reducing the amount of html generated by Yii2 and PHP, and concentrating more on a REST backend which generates JSON for our Vue.js-powered client side. It is mostly an API-first approach for all our Active Record models. <br>We take the API seriously here, which is why we spend a lot of time building good API documentation even though it is only used in-house. <br>With PHP7 &amp; the latest MySQL, the response times of our Yii2 JSON backend do not differ much from Node.js backends (15-20ms are the numbers I am talking about), so it's more than enough for our needs, and 10-20 times faster than we could imagine achieving with Drupal. At the same time, it's good old PHP with all the power of composer libraries and a stable codebase at our hands. <br>So, Yii2&amp;Vue.js responsiveness is huge, and in terms of code it's a pleasure to work with.</p><p>We are also using Vue.js in a number of internal projects.</p><h2 id="conclusion">Conclusion</h2><p>We've been writing Vue.js code every day for around 3 months in various projects, with impressive results. 3 months is nothing in the backend world, but it is something in the JS world :) We'll see how it goes further.</p><p>I expect Vue to become a primary JS framework in 16-24 months if Evan You makes right steps, at least around backenders and smaller teams of frontenders. 
I still consider React stack to be the primary JS framework of 2017, especially if React Native manages to mature and improve itself with the same pace it used to.</p><p><b>UPD:</b> this post got to the frontpage of HackerNews, and there is a useful discussion with 200+ comments there: <a href="https://news.ycombinator.com/item?id=13151317">https://news.ycombinator.com/item?id=13151317</a> <br>It also got to the top posts of Reddit webdev, with 70+ comments here:  <a href="https://www.reddit.com/r/webdev/comments/5ho71i/why_we_chose_vuejs_over_react/">https://www.reddit.com/r/webdev/comments/5ho71i/why_we_chose_vuejs_over_react/</a></p>]]></content:encoded></item><item><title><![CDATA[Rotating original file for image field in Drupal 7 and dealing with browser cache]]></title><description><![CDATA[Rotating original file for image field in Drupal 7 and dealing with browser cache]]></description><link>https://pixeljets.com/blog/rotating-original-file-image-field-drupal-7-and-dealing-browser-cache/</link><guid isPermaLink="false">5cb48194106efa4dc9d36cc4</guid><dc:creator><![CDATA[Anton Sidashin]]></dc:creator><pubDate>Sat, 09 May 2015 20:03:18 GMT</pubDate><content:encoded><![CDATA[<p>While working on new Qwintry.com tasks, we needed to give our operators an interface to rotate uploaded images (and I wanted to rotate the original image file). Surprisingly, I could not find anything like that among d.org modules, so I had to come up with my own solution. 
I was expecting to finish this task in an hour, but, as often happens, the way to the right solution took a bit longer.</p><p>For the final code, scroll down to the end of the post - for now I will be showing some ugly code that you don't really need :)</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/sites/default/files/rotating-original-file-image-field-drupal-7-276.png" class="kg-image"></figure><!--kg-card-end: image--><p>These links are created in the node--[type].tpl.php in my theme:</p><!--kg-card-begin: code--><pre><code>&lt;a href="&lt;?= url('admin/rotate/' . $node-&gt;field_photo[LANGUAGE_NONE][0]['fid'] . '/cw/field_photo') ?&gt;"&gt;&amp;#8635; Rotate CW&lt;/a&gt;  
&lt;a href="&lt;?= url('admin/rotate/' . $node-&gt;field_photo[LANGUAGE_NONE][0]['fid'] . '/ccw/field_photo') ?&gt;"&gt;&amp;#8634; Rotate CCW&lt;/a&gt;
</code></pre><!--kg-card-end: code--><p>the first version of the function:</p><!--kg-card-begin: code--><pre><code>&lt;?php  
function bdr_warehouse_rotate_image($fid, $direction, $field_name) {  
  $file = file_load($fid);

  $img = image_load($file-&gt;uri);
  image_rotate($img, $direction == 'cw' ? 90 : -90);
  image_path_flush($file-&gt;uri);
  $result = image_save($img);
  if ($result) {
    $nid = db_query("SELECT entity_id FROM {field_data_{$field_name}} WHERE {$field_name}_fid=:fid", array(':fid' =&gt; $fid))-&gt;fetchField();


    db_query("UPDATE {file_managed} SET filesize=:size WHERE fid=:fid", array(':size' =&gt; $img-&gt;info['file_size'], ':fid' =&gt; $fid));
    db_query("UPDATE {field_data_{$field_name}} SET {$field_name}_width=:width, {$field_name}_height=:height WHERE {$field_name}_fid=:fid LIMIT 1", 
      array(':width' =&gt; $img-&gt;info['width'], ':height' =&gt; $img-&gt;info['height'], ':fid' =&gt; $fid));

    db_query("UPDATE {field_revision_{$field_name}} SET {$field_name}_width=:width, {$field_name}_height=:height WHERE {$field_name}_fid=:fid LIMIT 1", 
      array(':width' =&gt; $img-&gt;info['width'], ':height' =&gt; $img-&gt;info['height'], ':fid' =&gt; $fid));


    cache_clear_all("field:node:$nid", 'cache_field');
    drupal_set_message('Image rotated! Use ctrl+f5 if the image preview is not rotated - it is your browser cache.');
  }

  if (!empty($_SERVER['HTTP_REFERER'])) {
    $mark = strpos($_SERVER['HTTP_REFERER'], '?') === false ? '?' : '&amp;';
    drupal_goto($_SERVER['HTTP_REFERER'] . $mark . 'refresh=1');
  }

}
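
// The callback above still needs a router entry; here is a hedged
// hook_menu() sketch (path and arguments mirror the links shown earlier,
// the exact registration used in the final version appears at the end):
//
// function bdr_warehouse_menu() {
//   $items['admin/rotate'] = array(
//     'page callback' =&gt; 'bdr_warehouse_rotate_image',
//     'page arguments' =&gt; array(2, 3, 4),
//     'access arguments' =&gt; array('create page content'),
//   );
//   return $items;
// }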
?&gt;
</code></pre><!--kg-card-end: code--><p>Take note that we flush the image presets cache and the node field cache, and also update the sizes of the picture in three places - in the file_managed table, the field_data_{field_name} table, and the field_revision_{field_name} table.</p><p>But it turned out that this was not enough, so I had to come up with some way to force Chrome to reload the picture: we pass ?refresh=1 in the url and we had this in our init hook:</p><!--kg-card-begin: code--><pre><code>&lt;?php  
function bdr_warehouse_init() {  
  if (!empty($_REQUEST['refresh'])) {
    drupal_add_js(drupal_get_path('module', 'bdr_warehouse') . '/scripts/img_refresh.js');
  }

}
?&gt;  
</code></pre><!--kg-card-end: code--><p>and the contents of the javascript file were like:</p><!--kg-card-begin: code--><pre><code>function updateQueryStringParameter(uri, key, value) {  
  var re = new RegExp("([?&amp;])" + key + "=.*?(&amp;|$)", "i");
  var separator = uri.indexOf('?') !== -1 ? "&amp;" : "?";
  if (uri.match(re)) {
    return uri.replace(re, '$1' + key + "=" + value + '$2');
  }
  else {
    return uri + separator + key + "=" + value;
  }
}
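
// Example: updateQueryStringParameter('/img/a.jpg?w=100', 'rand', '123')
// returns '/img/a.jpg?w=100&amp;rand=123' - appending a cache-busting param.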

setTimeout(function() {  
    // force image refresh in browsers after rotation
  var x = document.querySelectorAll(".field-type-image a, .image-widget-data a");

  var i;
  for (i = 0; i &lt; x.length; i++) {
    x[i].href = updateQueryStringParameter(x[i].href, 'rand', new Date().getTime());
  }  


  x = document.querySelectorAll(".field-type-image img, .image-widget img");
  for (i = 0; i &lt; x.length; i++) {
    x[i].src = updateQueryStringParameter(x[i].src, 'rand', new Date().getTime());
  }

}, 300);
</code></pre><!--kg-card-end: code--><p>I had to wrap the image refresh function in setTimeout because Drupal was throwing 503 errors if the image was reloaded too quickly (it is the image.module locking mechanism at work). BTW, don't use this updateQueryStringParameter, since the awesome <a href="http://benalman.com/projects/jquery-bbq-plugin/">jquery.bbq</a> is included in drupal 7 core :)  (but you don't need this js piece of code at all in the final version, so read further)</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/sites/default/files/rotating-original-file-image-field-drupal-7-277.png" class="kg-image"></figure><!--kg-card-end: image--><p><br>I also realized that I want to show the rotate buttons in the node edit form, in the image file widget, as well.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/sites/default/files/rotating-original-file-image-field-drupal-7-278.png" class="kg-image"></figure><!--kg-card-end: image--><p>Here comes the second version of the function, which renames the image file by adding a suffix at the end of the file name (this forces Chrome to reload the original file and all the image presets a lot more reliably than the javascript refreshing mechanism above). Finally, it works! And we can get rid of all the javascript mentioned above.</p><p>Here is the</p><h1 id="final-code-">Final code:</h1><p><strong>the main function:</strong></p><!--kg-card-begin: code--><pre><code>&lt;?php  
function bdr_rotate_image($fid, $direction, $field_name) {  
  $file = file_load($fid);

  $img = image_load($file-&gt;uri);
  image_rotate($img, $direction == 'cw' ? 90 : -90);
  $result = image_save($img);
  if ($result) {
    $uri = $file-&gt;uri;
    $ext = substr($uri, -3); // Change this if you expect some weird extensions like .jpeg !
    $new_uri = substr($uri, 0, -4) . '_1' . '.' . $ext;
    file_move($file, $new_uri);
    // Interpolating $field_name into SQL like this is not completely ok, but we do it here since the input is always safe in my case
    $nid = db_query("SELECT entity_id FROM {field_data_{$field_name}} WHERE {$field_name}_fid=:fid", array(':fid' =&gt; $fid))-&gt;fetchField();


    db_query("UPDATE {file_managed} SET filesize=:size WHERE fid=:fid", array(':size' =&gt; $img-&gt;info['file_size'], ':fid' =&gt; $fid));
    db_query("UPDATE {field_data_{$field_name}} SET {$field_name}_width=:width, {$field_name}_height=:height WHERE {$field_name}_fid=:fid LIMIT 1", 
      array(':width' =&gt; $img-&gt;info['width'], ':height' =&gt; $img-&gt;info['height'], ':fid' =&gt; $fid));
    cache_clear_all("field:node:$nid", 'cache_field');
    drupal_set_message('Image rotated!');
  }

  if (!empty($_SERVER['HTTP_REFERER'])) {
    drupal_goto($_SERVER['HTTP_REFERER']);
  }

}
?&gt;
</code></pre><!--kg-card-end: code--><p><strong>the node edit field widget override with rotation links</strong> (put in your theme template.php):</p><!--kg-card-begin: code--><pre><code>&lt;?php

function YOURTHEMENAME_image_widget($variables) {  
  $element = $variables['element'];
  $output = '';
  $output .= '&lt;div class="image-widget form-managed-file clearfix"&gt;';

  if (isset($element['preview'])) {
    $output .= '&lt;div class="image-preview"&gt;';
    $output .= drupal_render($element['preview']);
    if (user_access('create page content')) {
        $output .= "&lt;br&gt; &lt;a href='" . url('admin/rotate/' . $element['#file']-&gt;fid . '/cw/' . $element['#field_name']) . "'&gt;&amp;#8635; Rotate CW&lt;/a&gt;";
        $output .= "&amp;nbsp;&amp;nbsp; &lt;a href='" . url('admin/rotate/' . $element['#file']-&gt;fid . '/ccw/' . $element['#field_name']) . "'&gt;Rotate CCW &amp;#8634;&lt;/a&gt;";
    }
    $output .= '&lt;/div&gt;';
  }

  $output .= '&lt;div class="image-widget-data"&gt;';
  if ($element['fid']['#value'] != 0) {
    $element['filename']['#markup'] .= ' &lt;span class="file-size"&gt;(' . format_size($element['#file']-&gt;filesize) . ')&lt;/span&gt; ';
  }
  $output .= drupal_render_children($element);
  $output .= '&lt;/div&gt;';
  $output .= '&lt;/div&gt;';

  return $output;
}

?&gt;
</code></pre><!--kg-card-end: code--><p>The registered path in YOURMODULE_menu hook:</p><!--kg-card-begin: code--><pre><code>&lt;?php

$items['admin/rotate'] = array(
    'title' =&gt; 'Rotate',
    'page callback' =&gt; 'bdr_rotate_image',
    'page arguments' =&gt; array(2, 3, 4),
    'access arguments' =&gt; array('create page content'),
  );
?&gt;
</code></pre><!--kg-card-end: code-->]]></content:encoded></item><item><title><![CDATA[Building scalable IT system for delivery from US to Russia: Drupal, Symfony2 and Yii2 compared]]></title><description><![CDATA[Building scalable IT system for delivery from US to Russia: Drupal, Symfony2 and Yii2 compared]]></description><link>https://pixeljets.com/blog/building-scalable-it-system-delivery-us-russia-drupal-symfony2-and-yii2-compared/</link><guid isPermaLink="false">5cb48194106efa4dc9d36cc3</guid><dc:creator><![CDATA[Anton Sidashin]]></dc:creator><pubDate>Sat, 20 Dec 2014 13:15:35 GMT</pubDate><content:encoded><![CDATA[<p>I had not posted to the blog for a long time, and finally it’s time to share my experience with a new project. This post will also cover some badly structured thoughts about PHP frameworks :)</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="http://pixeljets.com/sites/default/files/building-scalable-it-system-delivery-us-russia-drupal-symfony2-and-yii2-271.png" class="kg-image"></figure><!--kg-card-end: image--><p>Qwintry Logistics (I will use QWL further in the text to avoid typing these words again) is an IT system which provides our b2b customers (US stores and freight forwarders) with a new way to get high-quality, affordable delivery to Russia, with a simple API for integration. We are not directly competing with monsters like USPS or UPS for cross-border delivery, but we are doing a very similar thing here - delivery from stores/warehouses to the customer's door (or a pickup point - which in many ways can be more convenient for the end customer than courier delivery to the door).</p><p>Under the hood, the system connects Qwintry warehouses (pickup hubs, to use the right terminology) in Oregon and Delaware, multiple US companies trucking freight to airports, IATA agents and companies that book air freight for us, customs brokers (to do customs clearance in Russia), and multiple delivery companies in Russia/Belarus/Kazakhstan. 
All these partners are connected into a single workflow, and as a result we get beautiful tracking for each individual package, without investing billions of dollars to build our own infrastructure (that's where we differ from UPS and USPS):</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/sites/default/files/building-scalable-it-system-delivery-us-russia-drupal-symfony2-and-yii2-compared-272.png" class="kg-image" alt="Tracking for package from US to Russia"></figure><!--kg-card-end: image--><p>At the same time, the IT system, and the know-how and experience in automation of fulfillment and logistics that we gained while working on the Drupal-powered <a href="http://qwintry.com">Qwintry b2c website</a> (a freight-forwarding business with 100k+ registered customers), allow us to achieve amazing quality and speed while keeping costs low and the pricing matrix attractive.</p><h2 id="framework-selection-drupal8-symfony2-yii2">Framework selection: Drupal8, Symfony2, Yii2</h2><p>When we decided to start building the QWL website, our team had plenty of experience in Drupal 5/6/7, some dirty hands in Drupal8, and decent experience in Symfony2 (several big projects live), but my mind was still restless, because none of these frameworks completely met the following requirements for the future QWL system:</p><ol><li>being blazing fast for authorized users and API calls</li><li>configuration in code, no configuration in db</li><li>ability to write less code, with the code being beautiful, clean, and fun to write</li><li>being a simple PHP platform, so we could quickly hire developers without skyrocketing budgets</li></ol><h2 id="why-not-symfony2">Why not Symfony2?</h2><p>While a lot of members of our team were (and still are) in love with sf2, I was not, since writing sf2 code was never fun for me - never even remotely close to fun. 
Yes, I'm lazy, and I'm probably not the perfect developer sf2 is built for :) <br>I realized very clearly that the level of sf2 complexity and over-engineering is too much for me when I noticed that I kept forgetting what real business task I was trying to solve - just after five minutes of staring at a bunch of routing or configuration files in our not-too-complex project, or at a Factory generating some basic form. I just dived into the code and tried to solve the issues that sf2 was throwing in my face most of the time. Not business issues, but issues related to the level of perfect abstraction Symfony2 provides. I think that's the fundamental difference between me and sf2: I want to solve real business issues quickly, without diving too deep into the game of absolute extendability and clean dependency-injectionability that this great and 100% testable framework provides :) <br>My opinion may change some day, but right now I believe that most projects and processes around me are not that complex, yet they still have tight deadlines, and the quicker I can provide real value to the business with my code - the better.</p><h2 id="why-not-drupal">Why not Drupal?</h2><p>At the same time, all the problems of Drupal -
starting from Endicia/USPS integration and ending with our own coupons engine, a full-fledged order picking system UI, and a highly customized referral program - it's all there, and we've been maintaining it for several years with an impressive number of incoming packages and orders, so every line of code that could fail has failed at least several times :)</li></ul><p>In big projects, during relatively long maintenance periods, you quickly realize that Drupal code like this has issues:</p><!--kg-card-begin: code--><pre><code>&lt;?php  
$order-&gt;field_paid[LANGUAGE_NONE][0]['value'] = 1;
node_save($order);  
qwintry_log($order-&gt;nid, 'Order was paid');  
?&gt;
</code></pre><!--kg-card-end: code--><p>Because sometimes node_save with a single changed field still fails due to InnoDB locks or whatever, and you at least need to wrap it in try { } catch blocks like this:</p><!--kg-card-begin: code--><pre><code>&lt;?php  
$order-&gt;field_paid[LANGUAGE_NONE][0]['value'] = 1;
try {  
  node_save($order);
  qwintry_log($order-&gt;nid, 'Order was paid');
} catch (Exception $e) {
  drupal_set_message('Critical warning for operator');
  qwintry_log($order-&gt;nid, 'Exception thrown during invoicing');
}

?&gt;
</code></pre><!--kg-card-end: code--><p>Why am I posting this example? Because when your projects grow so big that you have everyday problems like that (and you write a lot of custom code for new features, so your custom code base is comparable to the framework in size) - it may be a good sign that you need some lower-level framework (lower than Drupal), where transactions are there for all objects and work mostly automatically, and where saving objects to the db in the framework's 'traditional' way is a lot cheaper in terms of cpu and speed - so you don't resort to ugly db_insert or db_query("INSERT INTO {table}") instead of node_save when you need to process just 30-40 objects at a time.</p><p>I was even considering plain PHP, but it would be stupid in 2014 to reinvent the wheel :) <br>Another idea was to use Laravel, and I'm pretty sure it would have been a viable option - I've never heard bad things about it - not too complex, but still elegant and fast. <br>But instead we decided to try Yii2, which was of alpha quality at the time we started - and it turned out to be a great framework.</p><h2 id="why-yii2">Why Yii2?</h2><p>Historically, <a href="http://habrahabr.ru/post/207814/">Yii is the most popular PHP framework in Russia</a>, and probably in China (no links to Chinese IT websites, sorry), since the author of the framework, Qiang Xue, is Chinese (and lives in the US). It is not hugely popular in the US, as far as I know.</p>
Finding Yii developers is easier than finding sf2 developers :) You have to be careful while testing them, since the learning curve in Yii is not that steep - so there can be quite a number of bad programmers among these guys - those PHP guys that Ruby and Python devs make jokes about :)</p><p>I have a lot of friends in Russia who have been using Yii1 for years, but during those years I was successfully doing Drupal projects, and we honestly never had many projects where the custom code base could be bigger than the framework + contributed modules code base, and good speed under high load was not a big factor, so for that type of project it would have been stupid (and too expensive) to use Yii. <br>I still think that for those kinds of projects (where you don't write big amounts of custom code - and 2-3 small custom modules + a customized theme is not big) Drupal is a perfect engine. You basically develop most of the features at the speed of prototyping, you have access to a huge number of great contributed modules and to drush - it's just amazing how quick complex web development can be nowadays. <br>But now, when I look back and think of the projects that we've implemented in Drupal which had a huge number of development hours invested in them (thousands of hours of code writing) - I think that Drupal was not the perfect solution for these - it gives more issues than value.</p><p>So, I used Yii1 in my sandboxes for some time. Then, when we finally needed some PHP framework for serious code writing, we started to use Symfony2 <a href="http://pixeljets.com/blog/drupal-7-vs-symfony-2-overview-after-1-year-symfony-development">(read my blog post about this selection here)</a> - Yii1 already felt outdated, while Yii2 was not even remotely ready for usage. And Symfony2 was exciting to try, giving us new modern tools like Composer and Twig.</p>
It was a long-lasting project, and there was a risk of losing the whole community, which would switch to the fresh sf2, Laravel and Phalcon and never come back. But now I think it was worth it (and I don't think Fabien regrets his sf2 rewrite, either). The beta of Yii2 was released in April of 2014 (and it was a raw beta). We started working with Yii2 on the QWL project in March of 2014, so we had several funny issues related to the alpha quality of the code - but all of them were resolved in later releases.</p><p>So, in QWL, we needed great performance, and the code was mostly custom (a bunch of external APIs to integrate; a huge bunch of docs to generate from db objects - mostly for customs brokers; and a need for a good API of our own) - so it was obvious that this is not where Drupal shines. <br>And it was a great moment to try Yii2, since it was close to production quality already. <br>The new kid on the block, with all the goodness of modern PHP development: composer, namespaces, autoloading, great ActiveRecord, ready for building REST APIs. All that implemented in a slim and very simple framework core, which works with great <em>real</em> speed.</p><p>What is real speed? I use that term to distinguish between the performance of cached pages and the performance of pages without cache. 
<br>In Symfony2 (which is not fast at all compared to Yii) you can't even disable all the levels of caching, and I had a number of issues with that: your routing annotations are cached, your Yaml configuration is cached, your Doctrine objects and schemas are cached, your Twig templates are cached, and your DI <em>LazyServiceMapPass</em> is probably cached too - and you can't disable all these cache layers completely in the development environment, due to the complexity of the framework's features. It's easy to get lost when debugging something, and if you delete all the cache, the framework will be surprisingly slow while warming everything up again.</p><p>In Yii2 you can disable all caching, and the dev environment with all the debug panels will still be fast! You can (and you should) cache everything in production to be super-fast, but it's really amazing how fast it is in development mode. It was one of the reasons to choose this framework. <br>Other reasons:</p><h2 id="1-configuration-of-components-is-dead-simple-and-is-written-in-php-">1) Configuration of components is dead simple, and is written in PHP:</h2><p>if you see something like</p><!--kg-card-begin: code--><pre><code>&lt;?php  
'sms' =&gt; [  
     'class' =&gt; 'app\components\sms\SMSC',
     'user' =&gt; 'username',
     'key' =&gt; 'xxx'
 ],
?&gt;
</code></pre><!--kg-card-end: code--><p>in the website configuration file - you know that you can access this object through Yii::$app-&gt;sms and that its implementation is a class like this:</p><!--kg-card-begin: code--><pre><code>namespace app\components\sms;

class SMSC extends Component {  
  public $user;
  public $key;
  /* some methods here */
}
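
// A hedged usage sketch: once registered in the config above, the component
// is available application-wide. send() and its arguments are illustrative,
// not a documented API of this class:
// Yii::$app-&gt;sms-&gt;send('+15551234567', 'Your package has shipped');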
</code></pre><!--kg-card-end: code--><p>so configuration params are transformed into class properties by default, and you are not required to write additional processing code to turn configuration directives into something the class can use. Yes, it can be restrictive, and Symfony2 developers can give a bunch of examples where such a configuration approach lacks flexibility, and maybe it is not exactly self-documenting, but in most cases that's what you need - in most cases you don't want to know a thing about DI service compiler passes, and you don't want to create an additional XML service definition (which will be cached, remember? good luck with typos in those xmls) - you just want to create the component and start using it right away.</p><h2 id="2-most-of-decisions-are-made-by-core-developers">2) Most decisions are made by the core developers</h2><p>The Yii2 core feels monolithic compared to sf2 (the core developers wrote their own Logger, their own ActiveRecord, and a great RBAC permission system is also in core) - but it felt good when I was doing real work, compared to alternatives with a bigger amount of features (using Doctrine+ACL). <br>I like the fact that Bootstrap3 is in the core of Yii2 and there are widgets to do less typing (e.g. \yii\bootstrap\Modal for modals). I have had experience injecting Bootstrap3 into Drupal7 and I can't say it's a big pleasure.</p><p>At the same time, the culture of contribution in Yii is weak compared to the Drupal world. <br>You don't use a lot of contributed modules when you build a Yii project - you mostly rely on the core and your own code. Of course, there are a lot of contributed modules, but their number is ridiculously low compared to the number of Drupal user-contributed modules - at least, that was my impression. The level of extensibility is significantly lower in Yii2 compared to Drupal, but it didn't bug me much, since I know how complex and slow infinite extensibility can get. 
Yii2 is still in many ways more extensible than Drupal - for example, it's very easy to use your own user and session processing and customize the registration forms - and this flexibility is more important in big projects than a way to inject something in theme hooks or change menu routing details via hook_menu_alter (where Drupal shines).</p><p>Now, in Yii2, most contributed modules are hosted on GitHub (and on Packagist, for composer).</p><h2 id="3-activerecord-is-a-lot-less-typing-than-orm-like-doctrine-">3) ActiveRecord is a lot less typing than an ORM like Doctrine</h2><p>Active Record is a concept that mixes the object representing each table row and a super-object that can be used to retrieve specified table rows - in a single class. Sorry for this terrible explanation, but here is a simple example:</p><p>Drupal 7</p><!--kg-card-begin: code--><pre><code>$shipment = node_load(db_query("SELECT entity_id FROM {field_data_field_tracking} WHERE field_tracking_value=:tracking", [':tracking' =&gt; $tracking])-&gt;fetchField()); // yes, I know about db_select, but using it in a bit more complex cases looks even uglier than that 
$shipment-&gt;field_is_sent[LANGUAGE_NONE][0]['value'] = 1;
node_save($shipment);  
</code></pre><!--kg-card-end: code--><p>Yii2</p><!--kg-card-begin: code--><pre><code>$shipment = \app\models\Shipment::findOne(['tracking' =&gt; $tracking]);
$shipment-&gt;is_sent = 1;
$shipment-&gt;save();
</code></pre><!--kg-card-end: code--><p>Symfony2</p><!--kg-card-begin: code--><pre><code>$em = $this-&gt;getDoctrine()-&gt;getManager(); // if you're in controller
$shipment = $em-&gt;getRepository('PixeljetsShipmentBundle:Shipment')-&gt;findOneBy(['tracking' =&gt; $tracking]);
$shipment-&gt;setIsSent(1);
$em-&gt;flush();
</code></pre><!--kg-card-end: code--><p>In an ORM (Doctrine), the super-object whose job is to fetch rows from the db is the EntityRepository, while each db row is represented by an Entity object. This means you need a separate class for the Repository and a separate class for the Entity, so in the code above the ORM hierarchy is: <br>ShipmentRepository -&gt; Shipment</p><p>In Yii2 ActiveRecord, this super-object concept is mixed into the ActiveRecord class, which also holds event handlers related to the per-row objects and so on. <br>Obviously, Active Record is created by developers who are lazy and hate typing and creating a bunch of classes, while ORM is a more academic, loosely coupled, SOLID-supporting, and beautiful concept, which works better for enterprise development. <br>But the resulting object-retrieval code (and this is the code that you write ten times per day) is shorter and more elegant with ActiveRecord.</p><p>The underlying code of ActiveRecord is probably less than 10% of Doctrine's code size, and you can understand how it works in one day. Then try to explore the Doctrine code some day (I only did a quick overview, and yes, no surprise, it's big and complex).</p><p>Some beautiful ideas of Yii2 AR were, I think, borrowed from Eloquent (Laravel's AR), for example the relations:</p><!--kg-card-begin: code--><pre><code>&lt;?php

class Shipment extends ActiveRecord {  
  function getAuthor() {
    return $this-&gt;hasOne(User::className(), ['id' =&gt; 'user_id']);
  }
}
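
// A hedged sketch of the inverse relation (hasMany); the User class and
// the 'user_id' column are assumptions for illustration, not project code:
class User extends ActiveRecord {
  function getShipments() {
    return $this-&gt;hasMany(Shipment::className(), ['user_id' =&gt; 'id']);
  }
}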
?&gt;
</code></pre><!--kg-card-end: code--><p>now you can get the owner of a shipment:</p><!--kg-card-begin: code--><pre><code>&lt;?php  
$shipment = Shipment::findOne($id);
echo $shipment-&gt;author-&gt;username; // separate lazy sql query against users table executed here  
?&gt;
</code></pre><!--kg-card-end: code--><p>or, you can do it via join:</p><!--kg-card-begin: code--><pre><code>&lt;?php  
$shipment = Shipment::find()-&gt;where(['id' =&gt; $id])-&gt;joinWith('author')-&gt;one(); // select * from shipment left join user ... 
echo $shipment-&gt;author-&gt;username; // no sql query here!  
?&gt;
</code></pre><!--kg-card-end: code--><p>Isn't it beautiful and expressive? <br>While I always had problems remembering the arguments and names of Drupal's zoo of db functions (in D7 it got even worse with EntityFieldQuery and db_select - I don't really want to repeat myself, so read more about this <a href="http://pixeljets.com/blog/drupal-7-vs-symfony-2-overview-after-1-year-symfony-development">in my previous post</a>) - it really lacked simplicity, straightforwardness, and the "it works for everything in the system" impression - and while I didn't like Doctrine DQL and had a bad experience extending the DQL syntax to add a condition to my search query - Yii2's simplicity was like a miracle for me. <br>You can fall back to raw sql in conditions when you need to, still getting AR objects as a result:</p><!--kg-card-begin: code--><pre><code>&lt;?php  
// get 20 shipments which are older than 1 month or which author email is test@test.com 
$q = Shipment::find()-&gt;joinWith('author')-&gt;where(['author.email' =&gt; 'test@test.com']);
$q-&gt;orWhere('shipment.create_time &lt; :time', [':time' =&gt; time() - 60*60*24*30]);
$shipments = $q-&gt;limit(20)-&gt;all();

foreach ($shipments as $shipment) {  
  echo $shipment-&gt;tracking;
}
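
// A related hedged sketch: Yii2 transactions let several saves succeed or
// roll back together; model and field names are the same illustrative ones
// used above:
$tx = Shipment::getDb()-&gt;beginTransaction();
try {
  foreach ($shipments as $shipment) {
    $shipment-&gt;is_sent = 1;
    $shipment-&gt;save();
  }
  $tx-&gt;commit();
} catch (\Exception $e) {
  $tx-&gt;rollBack();
}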

?&gt;
</code></pre><!--kg-card-end: code--><p>By the way, Yii2 supports transactions out of the box, so you can save objects reliably and be sure that you don't leave your db in a bad state.</p><p>Other things to note in Yii2:</p><h2 id="bower">Bower</h2><ul><li>jQuery is in core, but it is not glued into the code - it is downloaded as a Bower package, and the idea of Bower manager support via Composer looks very nice (minor issue: it seems to slow down composer update). It means that Yii2 modules can specify dependencies on bower packages. (Bower is a package manager for frontend libraries, mostly javascript libraries - built by Twitter, and it seems to be gaining some momentum now)</li></ul><h2 id="db-is-primary-no-schema-generation-from-code">DB is primary, no schema generation from code</h2><ul><li>DB is primary, while the code is secondary. This is a very important difference compared to Symfony2 ORM and Drupal schema: if you remove your database completely, in Symfony2 you can regenerate the full db schema from your ORM models, and in Drupal you can do a similar thing; in Yii you can't do that easily - the AR code 'learns' about your db structure on the fly. You need some good db admin software to do db design - but I think that the SQLyog interface is superior to writing Drupal schemas by hand or using the Symfony2 console utility. Doctrine2's ability to generate ALTER queries from changed models still looks very cool :) In Yii2, to do migrations, you go to the db admin tool, alter the table using its UI, and copy&amp;paste the resulting ALTER into a migration file, which is later pushed to git. In the db there is a separate table to remember migration states. 
This approach is pretty simple and gets things done.</li></ul><h2 id="template-layer">Template layer</h2><ul><li>the core template engine is plain PHP - the only thing I don't like here is that it's easy to forget to escape some user data, and it's too much typing to escape everything:</li></ul><!--kg-card-begin: code--><pre><code>&lt;?= \yii\helpers\Html::encode($var) ?&gt;  
</code></pre><!--kg-card-end: code--><p>I don't like this, so I made a shortcut function:</p><!--kg-card-begin: code--><pre><code>function e($var) {  
  return \yii\helpers\Html::encode($var);
}
</code></pre><!--kg-card-end: code--><p>and now in templates I use</p><!--kg-card-begin: code--><pre><code>&lt;?= e($var) ?&gt;  
</code></pre><!--kg-card-end: code--><ul><li>still not as beautiful as {{var}} in Twig, and not OOP, but I can live with it (for now).</li></ul><p>BTW, there is a <a href="https://github.com/yiisoft/yii2-twig">Twig integration for Yii2</a> - maintained by Yii core devs. Not tried it yet.</p><p>The big difference between Drupal templating and Yii2 templating is that Yii2 templates mean more typing and copy&amp;pasting when you start - since you need to add a real template (view) for each form, each page, and each model CRUD, while in Drupal when you create a new node type or a simple form you don’t need to take care of creating any template - you need to think about it only if you need some fancy html structure. <br>At the same time, I found it easier to manage the html structure of a complex form in Yii2 than in Drupal7 (which has a great theme layer - but you could write a book on how to do specific things there - and that is a killer for newbies).</p><!--kg-card-begin: code--><pre><code>&lt;div class="row"&gt;  
    &lt;div class="col-sm-3"&gt;
        &lt;?= $form-&gt;field($model, 'broker_status')-&gt;dropDownList(Shipment::getAvailableBrokerStatuses(), ['prompt' =&gt; '']) ?&gt;
    &lt;/div&gt;
    &lt;div class="col-sm-4"&gt;
        &lt;?= $form-&gt;field($model, 'broker_message')-&gt;textarea() ?&gt;
    &lt;/div&gt;
&lt;/div&gt;
</code></pre><!--kg-card-end: code--><p>to do this basic styling in Drupal you have to fight with form_alter() (adding #field_prefix to mimic the required structure) or <a href="https://www.drupal.org/node/350634">override the template completely</a>, which means you need to add a separate theme callback - and while it is not rocket science, lazy developers may feel a bit upset when a designer says that he needs this element floating there and he can’t do it just via CSS, using the default Drupal form html structure. <br>In Yii2 you go this extra mile at the start, using Gii, which generates CRUD for models from a db table, and then you have precise control over the html; it is easy to modify and support - all the templates in the system work the same way - and you know exactly in which file the html structure of the registration form is located. In Drupal, during the maintenance period of some complex project, you always need some time to figure out how this specific div was generated - was it some form_alter() from a contributed module adding it, or was it your div from your custom theme?</p><p>In Symfony2, the Twig engine is great, and I can’t say anything bad about it - it is similar to the Yii2 approach. I’m afraid Twig won’t save the Drupal8 theme layer from complexity unless devs are ready for big functionality degradation (because Drupal flexibility is complex, and if you convert all these theme_ callbacks to .twig files - you will also get performance degradation that will slow down Drupal even further - and cache cleanup will be a bigger pain than it is now.. but we’ll see the result when Drupal8 is released)</p><h2 id="forms">Forms</h2><p>In Yii2, forms are usually created on the basis of some model. 
<br>That is a nice concept - ActiveRecord and form models have validation rules described in the same syntax - <a href="https://github.com/yiisoft/yii2/blob/master/docs/guide/input-validation.md">with rules() and scenarios() methods inside the model class</a>. Note that the amount of typing is minimal, and you can pass an anonymous function as a custom validation rule - again, no redundant typing, and a quick result.</p><p>Client-side validation is automatically generated from the validation rules described in the model class.</p><p>The nice ‘side effect’ of the same validation rules being used not just for forms but for the base Model class is that you can run validation for any ActiveRecord object simply by calling</p><!--kg-card-begin: code--><pre><code>&lt;?php  
if (!$shipment-&gt;validate()) {  
    $errors = $shipment-&gt;getErrors();
}
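// The rules that validate() applies are declared in the model itself.
// A minimal, hypothetical sketch (the field names are illustrative,
// not taken from the project described above):
//
// class Shipment extends \yii\db\ActiveRecord
// {
//     public function rules()
//     {
//         return [
//             [['tracking_number'], 'required'],
//             [['weight'], 'number', 'min' =&gt; 0],
//             [['customer_email'], 'email'],
//         ];
//     }
// }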
?&gt;
</code></pre><!--kg-card-end: code--><h2 id="menu-routing">Menu routing</h2><p>When you need to create another route in Drupal7, you need to implement hook_menu() and append your new path to the registry of routes:</p><!--kg-card-begin: code--><pre><code>&lt;?php  
function devel_generate_menu() {  
  $items = array();

  // Admin user pages
  $items['admin/generate'] = array(
    'title' =&gt; 'Generate items',
    'description' =&gt; 'Populate your database with dummy items.',
    'position' =&gt; 'left',
    'page callback' =&gt; 'system_admin_menu_block_page',
    'access arguments' =&gt; array('administer site configuration'),
    'file' =&gt; 'system.admin.inc',
    'file path' =&gt; drupal_get_path('module', 'system'),
  );
  return $items;
}

?&gt;
</code></pre><!--kg-card-end: code--><p>in Symfony2 you do a similar thing, but the routes there are plain - no metadata like menu titles. So you need to create an xml file like this:</p><!--kg-card-begin: code--><pre><code>&lt;?xml version="1.0" encoding="UTF-8" ?&gt;

&lt;routes xmlns="http://symfony.com/schema/routing"  
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://symfony.com/schema/routing http://symfony.com/schema/routing/routing-1.0.xsd"&gt;

    &lt;route id="fos_user_registration_register" pattern="/"&gt;
        &lt;default key="_controller"&gt;FOSUserBundle:Registration:register&lt;/default&gt;
    &lt;/route&gt;
&lt;/routes&gt;  
</code></pre><!--kg-card-end: code--><p>or you can specify routes via annotations in the controller:</p><!--kg-card-begin: code--><pre><code>&lt;?php  
class PageController extends Controller  
{
    /**
     * @Route("/pages/{id}.{_format}", name="pages_view")
     */
    public function viewAction($id)
    {
    // action code here
    }
}
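// Aside (not from the original text): when an annotation route doesn't
// seem to work in dev, clearing the cache usually helps, e.g. with the
// Symfony 2.x console command:
//   php app/console cache:clear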
?&gt;
</code></pre><!--kg-card-end: code--><p>I loved the idea of annotations at first sight, but in practice development turned out to be a pain due to caching issues (when something specified in annotations does not work, you have to clean up the cache every time you change your annotation). <br>The fact that you can specify routes in different ways across the same project is also frustrating for me.</p><p>By the way, in Drupal8 the way to specify menu routes <a href="https://www.drupal.org/node/2116767">is pretty much the same as the #1 way in Symfony2 - via a [modulename].routing.yml file.</a></p><p>and now the typical Yii2 routing:</p><!--kg-card-begin: code--><pre><code>&lt;?php  
class PageController extends Controller  
{
   public function actionView($id) {
   // action code here
   }
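   // By the same convention, another method here - say, a hypothetical
   // actionEdit($id) - would instantly be available as the page/edit route.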
}
?&gt;
</code></pre><!--kg-card-end: code--><p>that's right - you don't need to write anything to get the route into the routes registry; it's just the convention that action* methods in a controller are turned into routes. In this specific case the ?id=xxx query parameter is also required, and the system throws an error if it is not there. Of course, you need to add access callbacks to restrict access, but you can do it later when you need it, via separate conventional methods in the controller - it is done by attaching <a href="https://github.com/yiisoft/yii2/blob/master/docs/guide/concept-behaviors.md">behaviors</a>. <br>In development you don't even need to clean up the cache for a new action to start working! <br>This whole idea that routes are not described separately can be shocking, but it works pretty well, and it's a breeze when you add a lot of ajax callbacks with separate paths.</p><p>...</p><p>You also get a CLI interface (which is easy to extend by putting a controller in a special folder), <a href="https://github.com/yiisoft/yii2/blob/master/docs/guide/security-authorization.md">access control</a>, various logger types (logging to files, db, syslog, email, with granular setup), a basic web ui for rapid html and model generation (called Gii), mongodb support, memcache and redis support for caching (file caching is the default), and sphinx support.</p><h2 id="implementation-of-specific-features-">Implementation of specific features:</h2><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/sites/default/files/building-scalable-it-system-delivery-us-russia-drupal-symfony2-and-yii2-270.png" class="kg-image"></figure><!--kg-card-end: image--><p>Some notes regarding API authentication: in Yii2 this part is pretty well thought through and there is <a 
href="https://github.com/yiisoft/yii2/blob/master/docs/guide/rest-authentication.md">a separate handbook page for this</a>. In QWL, when a customer uses the web panel, he is logged in through sessions and cookies, while during an api call the current user object is created via a different authenticator mechanism - header tokens in our case - and this didn’t require a lot of developer time to implement.</p><p>Another interesting task was creating a dashboard where an operator can take a quick look at the packages that require special attention (if they are sitting for too long in a pickup point, or they have an active issue flag, or some status in tracking is incorrect and unexpected, etc). That is a very important feature that allows us to monitor the health status of each part of the logistics chain.</p><p>We have a pretty big and advanced dashboard for operators on the Qwintry b2c website (Drupal powered), and it is implemented using Panels and Views. It has 15+ panes with misc stats, and to make it open in a time a human can tolerate (2 seconds) we had to move most of the panes to an ajax loader (that was a custom hack for Panels) - otherwise the loading times could be 10+ seconds.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/sites/default/files/building-scalable-it-system-delivery-us-russia-drupal-symfony2-and-yii2-269.jpg" class="kg-image"></figure><!--kg-card-end: image--><p>We also implemented flexible file storage for documents - when a package is created, all the files are put on the local filesystem for faster access and preview, and when the shipment is marked as “delivered”, all the files are moved to S3 for long-term storage, to save local file space. <br>We’ve created a File AR model with a ‘storage’ field, and when the ‘storage’ field is changed to s3, the files are copied to s3 - during the afterSave hook of the model. 
We’ve used the <a href="https://github.com/aws/aws-sdk-php">AWS sdk for php</a> for this, plus a very thin Yii2 component, so s3 is available as a service.</p><p>We’ve also integrated the littlesms.ru api for SMS notifications - it is also available as a service throughout the system, so sms are sent like this:</p><!--kg-card-begin: code--><pre><code>&lt;?php  
Yii::$app-&gt;sms-&gt;sendMessage($phone, $message);
?&gt;
</code></pre><!--kg-card-end: code--><p>and in the project config it is:</p><!--kg-card-begin: code--><pre><code>&lt;?php  
'sms' =&gt; [
    'class' =&gt; 'app\components\sms\LittleSMS',
    'user' =&gt; 'user111',
    'key' =&gt; 'pw'
],
?&gt;
</code></pre><!--kg-card-end: code--><p>when littlesms.ru was discontinued we switched to the SMSC api, but we didn’t change any code using the sms service - we just created a new (very simple) component based on the SMSC api and changed the system config:</p><!--kg-card-begin: code--><pre><code>&lt;?php  
'sms' =&gt; [
    'class' =&gt; 'app\components\sms\SMSC',
    'user' =&gt; 'userSMSX',
    'key' =&gt; 'pw2'
],
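// Both components could formalize this shared contract - a hypothetical
// sketch, not code from the project:
//
// interface SmsSenderInterface
// {
//     public function sendMessage($phone, $message);
// }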
?&gt;
</code></pre><!--kg-card-end: code--><p>both components had the same sendMessage method (they could share an Interface with a sendMessage method, or even an abstract class that both classes inherit), so the change was seamless for the rest of the code.</p><p>Another cool feature of QWL is the implementation of the AMQP protocol for task queues (unfortunately, <a href="https://github.com/yiisoft/yii2/issues/492">still nothing like that is implemented in core</a>). <br>For example, when we need to generate documents for a broker, it can be an archive with 200+ pdf documents. <br>We use mpdf for pdf generation, and it is not quick at all, unfortunately. So it’s obvious that this is a long-running task that needs to be executed asynchronously.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/sites/default/files/building-scalable-it-system-delivery-us-russia-drupal-symfony2-and-yii2-267.png" class="kg-image"></figure><!--kg-card-end: image--><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/sites/default/files/building-scalable-it-system-delivery-us-russia-drupal-symfony2-and-yii2-268.png" class="kg-image"></figure><!--kg-card-end: image--><p><br>We use this task queue when sending sms and email messages, generating pdf documents, generating document archives, and in other places.</p><p>QWL also has a registry of courier companies and a set of methods so that the final delivery company is selected on the basis of multiple parameters (pickup point selected by the customer, address city, box size, box weight, etc) - but that's a topic for a separate post. 
The <a href="http://logistics.qwintry.com/map">Pickup points map</a> contains merged locations from multiple courier companies.</p><h2 id="conclusion-on-framework-selection">Conclusion on framework selection</h2><p>Yii2 is simple, and if you have a good PHP background, it will be easy to learn. <br>When I was learning it, and looking into how things are done there, I got this great feeling that the authors of Yii2 did a lot of stuff exactly the same way I would do it in a "perfect framework" that I would create. <br>Many times my thought was “These guys are lazy just like me” :) This great balance of simplicity and flexibility - the balance between having fun writing code and keeping the code readable in long-term maintenance - is hard to achieve. <br>I’m pretty sure some developers would say that Yii2 is a bit quick&amp;dirty and not perfectly testable (by the way, there is a Codeception integration in the core of the Yii2 development package - so it is definitely testable!), that services and configuration management are too simple here, and that static calls to get services are ugly - most of it is true, especially compared to sf2 - and I’m ok with that opinion - it’s just a matter of personal preference, experience, targets, the number of developers in the project, and the project type. I try to think about business targets most of the time when we write our code. I’ve seen lots of great, talented developers who dream about 100% test coverage and spend hours discussing which PHP DI component is better - and then they fail to meet the deadlines, or just distort the real task so that it no longer matches customer needs but fits into the perfect architecture of the system instead. <br>At the same time, nobody wants to get ugly unmaintainable code in the repo, me neither.</p><p>It’s all about balance :) and for me, Yii2 seems to have a good balance. <br>The QWL project is in production, it’s used by customers, and it’s a pleasure to maintain and extend it. We'll see how it goes. 
<br>So, for our team, it was a period of ~7 years of mostly Drupal development (since 2005), then ~2 years when our team was building both Drupal7 and Symfony2 projects, and now in 2014-2015 we have Drupal, Symfony2 and Yii2 projects to maintain. Luckily, a bigger framework selection in our case does not mean increased development complexity - quite the opposite: the tools now fit the tasks even better, and we see more clearly which framework is right for a future project.</p>]]></content:encoded></item><item><title><![CDATA[Packt Publishing book - Premium Drupal themes]]></title><description><![CDATA[Packt Publishing book - Premium Drupal themes]]></description><link>https://pixeljets.com/blog/packt-publishing-book-premium-drupal-themes/</link><guid isPermaLink="false">5cb48194106efa4dc9d36cc2</guid><dc:creator><![CDATA[Anton Sidashin]]></dc:creator><pubDate>Sat, 11 Jan 2014 19:30:52 GMT</pubDate><content:encoded><![CDATA[<p>In 2013 I was invited by Packt Publishing to play the role of technical reviewer for one of their books.<br>Just got my sample of the book <a href="http://www.packtpub.com/premium-drupal-themes/book">Premium Drupal Themes</a> by mail.<br>It was an honor and a nice experience for me, and now I hope to find time and become an author some day :)</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/content/images/2019/04/packt-publishing-book-premium-drupal-themes-252.jpg" class="kg-image"></figure><!--kg-card-end: image--><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="https://pixeljets.com/blog/content/images/2019/04/packt-publishing-book-premium-drupal-themes-253.jpg" class="kg-image"></figure><!--kg-card-end: image-->]]></content:encoded></item><item><title><![CDATA[Drupal 7 vs Symfony 2: overview after 1 year of Symfony development]]></title><description><![CDATA[Drupal 7 vs Symfony 2: overview after 1 year of Symfony 
development]]></description><link>https://pixeljets.com/blog/drupal-7-vs-symfony-2-overview-after-1-year-symfony-development/</link><guid isPermaLink="false">5cb48194106efa4dc9d36cc1</guid><dc:creator><![CDATA[Anton Sidashin]]></dc:creator><pubDate>Tue, 15 Oct 2013 15:55:09 GMT</pubDate><content:encoded><![CDATA[<p>We've decided to switch to Symfony2 development in July 2012, if I'm not mistaken - after 7 years of (mostly) Drupal development. <br>There were reasons to do that:</p><h3 id="1-not-too-great-experience-with-high-load-projects-powered-by-drupal-7">1. Not-too-great experience with high load projects, powered by Drupal 7</h3><p>Okay, Drupal is good enough until you get a project with big expectations in terms of response time for authenticated users. Nginx proxy, boost and memcached help a lot with anonymous page hits, but things are sometimes not good enough when we talk about an authenticated user saving nodes (1 field = 2 inserts, into the field data and revisions tables; with 40 fields you get super-slow node saving, at least on non-ssd hosting), or using ajax stuff like dynamic forms (Drupal form ajax is not bad, it's just not super-blazing-fast). <br>Things get even worse if you have a big website with lots of views and fields, and you just clear the cache on production - your server can become close to non-responsive. <br>That's what happened to us in several projects. <br>Okay, you can avoid using views and write your own replacement for fields which will store data in a single table, and your own ajax that does not pass all the ids of the page back to the server - and it will probably be almost as fast as lower-level framework code (yii, symfony, whatever), but what's the point in using Drupal then? The code will be uglier (let's face it, the Drupal API is not as clean and decoupled as the Symfony2 API). The amount of time spent on development will probably be close.</p><h3 id="2-settings-in-db">2. 
Settings in DB</h3><p>Drupal was built to allow anyone to create a complex website without writing a line of code. <br>It sounds like an unrealistic dream for most website developers who are using raw PHP/Ruby/Python/whatever, but not for developers who are using Drupal! The latest versions of Drupal and Views are indeed very close to this amazing dream. <br>As a side effect, when you are creating a project where you need lots of programming, you are struggling with the settings-in-db approach. <br>Of course, there are <a href="http://drupal.org/project/features">Features</a>, which allow you to do impressive deployment stuff. <br>But then, when you get a bug in Features during deployment which occasionally deletes fields from user profiles on production (we indeed encountered this bug in production once - god bless db backups!), you think "I'd better write my own code, it's too much magic". <br>Still, I think that Features is a brilliant piece of functionality and a brilliant approach.</p><h3 id="3-one-of-our-biggest-customers-fall-in-love-with-symfony2">3. One of our biggest customers fell in love with Symfony2</h3><p>These guys asked if we would be able to create new projects using some robust framework, suggesting Symfony2 after doing their own research. It was a matter of big projects, thousands and thousands of man-hours. Why not?</p><h3 id="4-drupal8-is-using-symfony2-components">4. Drupal8 is using Symfony2 components</h3><p>That was another sign that Symfony is a good choice. <br>Just like we chose jQuery and git for all our projects because Drupal decided to go with them - and not with mootools, prototype, mercurial or bazaar. For business needs, it's almost always good to use mainstream technologies, though it's not as geeky and cool as <a href="http://apostrophenow.org/">writing a CMS in node.js</a>.</p><h3 id="5-our-team-and-me-mostly-was-eager-to-learn-new-system-after-7-years-of-mostly-drupal-development">5. 
Our team (and me, mostly) was eager to learn a new system after 7 years of (mostly) Drupal development</h3><p>Oh yeah. The grass is always greener on the other side of the fence, you know :) <br>While Drupal is moving forward, some stuff in the PHP world was still relatively new to Drupal in 2012 (annotations, composer, twig, ...). <br>And it's interesting to use this stuff in a real project, not on localhost!</p><p>That's it. Now more than a year has passed, and you know what? <br>After two months of work with Symfony (we were building a big and complex ecommerce solution), I realized that it would be a big mistake to switch over completely.</p><p>Drupal is still a very good choice for 85% of websites in the world. You don't need high load and ultra flexibility in 85% (probably 95%?) of the world's websites. <br>And Drupal can be a better choice than Symfony2 for all these projects. <br><strong>The main (the only?) reason is - rapid and easy development in Drupal.</strong></p><p>You can create a working Drupal website with the speed of prototyping (if you are a good Drupal developer with solid experience). <br>Okay, all the hardcore features still require coding, but you can really deliver working stuff in hours, not days of work. <br>Symfony is good, but it's a lot more manual work. A good sign that this problem is real is the existence of the <a href="http://rad.knplabs.com/">Symfony RAD edition bundle</a>.</p><p>I have a good example - right now, when we're hiring a Symfony2 developer, we ask them to build a basic website which contains just registration (basic stuff, FOSUserBundle; we ask to remove the username field, just leave email and password) + profile (firstname, lastname) + Facebook login (HWIOAuthBundle).</p><p>The average time of implementing this by developers is ~7 hours. And most of the time is spent writing and testing pretty big yaml configurations for bundles. 
<br>The same stuff in Drupal will take ~2-3 hours - a drupal install, the email_registration module, and <a href="https://drupal.org/project/hybridauth">https://drupal.org/project/hybridauth</a> or similar for facebook login. <br>So, it's harder to do a "rocket launch" in Symfony.</p><p>Some things that are available in Drupal out of the box (like the permission system) are not easy to do in Symfony2. Typically most sf2 projects contain pretty ugly checks against the current user role. And that's a bad practice compared to permission checking - if you use role checking and you need to add another role which can do the same action, you will have to modify code to add this new role. <br>Of course, you can implement some kind of security voter or ACL in sf2, and it can be smarter and more flexible than in Drupal, but nobody wants to mess with it without a REAL reason. And that's what I call poor engineering - when "right" choices are too painful, so most developers tend to stick to "wrong" choices.</p><h3 id="symfony-components">Symfony components</h3><p>Symfony components (Doctrine ORM, HttpKernel, Forms, Routing, the Doctrine annotation parser, ...) leave a good impression - they are completely decoupled, and this is really the way to go in terms of architecture; it's not surprising that Drupal8 uses a lot of Symfony components.</p><p>At the same time, some third-party bundles leave the impression that their creators <a href="https://github.com/raulfraile/LadybugBundle/issues/37">are in love with over-engineering</a>, not with getting things done.</p><h3 id="orm-and-db-queries">ORM and db queries</h3><p>The Drupal 7 dev team made a lot of effort to build their own ORM-like code for entities (<a href="https://drupal.org/node/1343708">EntityFieldQuery</a>), but Doctrine2 ORM and its query builder are far more robust and polished, and play nicely with Symfony models and annotations; this part is really beautiful in sf2 - it's a no-brainer that Symfony is stronger than Drupal in this part. 
<br>Writing your own db queries in code is a pleasure in Symfony, and not a big pleasure in Drupal 7, especially if you are writing 'raw' SQL - the main reason is that Drupal 7 fields are a victim of their own flexibility (they allow you to change a single-value field to a multiple-value one on the fly, and they are also designed for multilingual websites) - this requires a separate table for each field, while in Symfony2 you have to decide whether a field is single-value (then it's a column in the model table) or multiple-value when designing your db structure. A multiple-value field is typically not a field but a separate model in sf2. <br>The Drupal 7 approach is good when you use it from the admin interface, but not that good when you dig into code: you need three joins against tables with ugly long names like "field_data_field_send_invoice_paid" just to build a condition that checks three fields of a node (unless you use EntityFieldQuery, which is designed only for loading nodes - so it's really a performance hit to use it; that's why it's not too popular in our code).</p><h3 id="third-party-infrastructure">Third-party infrastructure</h3><p>The whole third-party infrastructure of sf2 relies heavily on Github, Composer and Packagist, and it leads to difficulties in tracking which version of a package is good for your installation - for example, some packages can break when you go from Symfony 2.1 to Symfony 2.2. <br>If you still develop in production (ouch!), I bet you will spend a lot of hours trying to understand what to write in composer.json to bring your website back to life. <br>Of course, this happens mostly because Composer and Github were created to make everyone in the PHP world happy, not for a specific framework. <br>But for a beginner, the Drupal module system with 6.x/7.x branches of all modules and themes is way easier to understand.</p><p>Another annoying thing is that deployment to production in Symfony2 is significantly more complex than in Drupal. 
<br>In Drupal, you commit all third-party modules to the single git repo of your project, then you launch <em>git pull, drush updatedb, drush cc</em> on production and you are (typically) good.</p><p>In Symfony2, vendors (third-party modules) are not stored in the central git repo of your project - most of them have their own git repos. <br>So, when you deploy to production, you need to launch <em>composer install</em> and wait while all the packages update (and your production will be offline during this time). <br>And if some package's git repo is offline - you are in trouble.</p><p>Of course, there is a <em>right</em> way to do the deployment - <a href="http://capifony.org/">Capifony</a>, based on the ruby-powered Capistrano. But you need time and effort to get used to it.</p><h3 id="code-maintenance">Code maintenance</h3><p>But when you're done with a complex project in Symfony2, I would say it's way easier to maintain it over time, and the code is cleaner and easier to understand for another developer. Though, for me, it's still not a big joy to write lots of these OOP classes when you are developing stuff. You need time to get used to it. <br>A fresh example of what I don't like in sf2: <br>if you want to output the standard translated message "The item was created" to flash (a flash is a session-stored message, the equivalent of drupal_set_message() in Drupal) you need to write: <br><code> </code><br><code>$this-&gt;get('translator')-&gt;trans('flash.create.success', array(), 'JordiLlonchCrudGeneratorBundle'); </code> <br>Good luck trying to type that in without editor shortcuts.</p><p>and if you compare it to <br><code> </code><br><code>t('The node was created successfully'); </code><br> <br>you probably get the idea of the difference between Symfony and Drupal which frustrates Drupal developers :) <br>Yes, that's a global function t() - it's not super architecturally correct compared to the translator service in Symfony, which you can easily override when you need to, but.. 
I don't care about overriding the translator in every project, but I do use the standard translator in every project, and I usually use the crud generator, and it's really <em>a lot</em> of typing! <br>Every time I write this, I wonder whether this nice guy, JordiLlonch, creator of the CrudGenerator bundle, was thinking about all the developers who would have trouble remembering his name, especially the 'iLl' part :) <br>Of course, you can create your own version of a global t() (or extend the base controller and add a YourController::t function) in Symfony2. <br>But <br>1) you will have to do it in every project <br>2) you will have to explain to your team that there is a non-standard shortcut <br>3) (in case you do it via a global function) you will have this bad feeling that you are going against the architecture of your framework :)</p><h3 id="performance">Performance</h3><p>Symfony is definitely not about "we have lean and mean code which is fast". <br>It's about "we have good, a bit bulky, but good-for-everyone architecture and 5 levels of caching, and then we have another level that caches the cache of the cache". <br>Let's just say that Doctrine ORM alone has 3 types of cache. <br>And you can't completely disable caching of everything, even in the dev environment. <br>The only reliable way to test configuration/annotation changes is to delete the cache every time you modify them.</p><p>Symfony2 is significantly faster than Drupal. It implements lazy-loading and autoloading everywhere, and it's nice - the memory footprint is relatively small. You can lower the memory limit - instead of 128mb (drupal) you can (typically) use 64mb (sf2) for the same functionality. <br>Doctrine is also relatively fast in production. 
It is clever enough to make a single big update/insert for several objects when you call flush(), instead of the ugly multiple node_save calls which can take lots of time to complete in Drupal.</p><h3 id="conclusion">Conclusion</h3><p>Symfony2 is more "academic", OOP, professional and serious, faster in terms of performance, and easier to maintain for a team of developers. <br>Drupal means more fun, less code to write, and faster development and deployment - sometimes quirky, quick, and dirty. It is quickly moving to OOP in some of its parts, while other parts are still procedural. That's what I call 'dirty', too.</p><p>The same project will cost roughly twice as much developed in Symfony compared to Drupal, and will take twice as much time. <br>I think the learning curve is approximately the same for Drupal7 and Symfony2.</p><p>So, we're still doing Drupal development now. <br>At the same time, we were lucky to hire pretty talented sf2 developers, so we now do a lot of Symfony2 projects (5 complex projects so far) and even our Drupal team is starting to enjoy it - but these are still two separate teams, one doing Drupal, one doing Symfony2. And I'm pretty sure that knowledge of Symfony2 will help us when we start doing Drupal8 development - that's what I call synergy :)</p><p>upd: due to good search engine positions spammers seem to love this post, so comments are now disabled.</p>]]></content:encoded></item><item><title><![CDATA[Storing monetary amounts in db? Use decimals, not floats!]]></title><description><![CDATA[Storing monetary amounts in db? 
Use decimals, not floats!]]></description><link>https://pixeljets.com/blog/storing-monetary-amounts-db-use-decimals-not-floats/</link><guid isPermaLink="false">5cb48194106efa4dc9d36cc0</guid><dc:creator><![CDATA[Anton Sidashin]]></dc:creator><pubDate>Thu, 13 Dec 2012 05:01:58 GMT</pubDate><content:encoded><![CDATA[<p>Lots of people will be surprised (or shocked, when it happens in production) when they see that MySQL can work like this:</p><p>Query: <br><code>SELECT transaction_id, amount FROM transactions WHERE amount&gt;20.5</code></p><p>Response: <br><code>123|20.5</code></p><p>Query: <br><code>SELECT transaction_id, amount FROM transactions WHERE amount=20.5</code></p><p>Response: <br><code>No rows</code></p><p>So, 20.5 is really greater than 20.5 in an SQL database. Sometimes.</p><p>Oh yeah, it looks like <a href="http://drupal.org/project/balance_tracker"><code>balance_tracker</code></a> developers were not aware of that (until today). So, it's a time bomb for hundreds of websites which rely on this and other modules that use floats to store monetary amounts in the db and run SQL selects comparing the stored values against something. In our case, it was 20.563333333... stored in a balance, which prevented the auto-payments system in our big project (built on top of balance_tracker + Ubercart) from working properly.</p><h2 id="how-does-it-happen">How does it happen?</h2><p>When you calculate some value in PHP (and you use floats in PHP, of course):</p><!--kg-card-begin: code--><pre><code>&lt;?php  
 $total = 14 / 9; // total is now 1.5555555555... in PHP
?&gt;
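&lt;?php
// A minimal, hedged demonstration (standard IEEE-754 float behavior,
// not from the original post): float math breaks equality comparisons,
// which is exactly how a "WHERE amount=20.5" query can find no rows.
$sum = 0.1 + 0.2;       // not exactly 0.3 in binary floating point
var_dump($sum == 0.3);  // bool(false)
// Safer: keep integer cents in PHP and an INT or DECIMAL column in SQL.
$cents = 1050 + 999;    // exact integer arithmetic; render as a float later
?&gt;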
</code></pre><!--kg-card-end: code--><p>and store it in an sql table in a cell of type float, it is stored as 1.55555555555.. <br>(but it shows as 1.5 when you look at the value using phpMyAdmin or SqlYog - which is understandable, but really adds confusion!)</p><p><b>That's why you need decimals.</b> When you put 1.55555555.. in a cell of type decimal with two digits after the point, it is stored as 1.56, rounded to the column's scale. So, it works the way most people expect it to work when dealing with financial stuff. <br>And when you use Drupal fields to store monetary amounts in your nodes, use decimal as the field type, too!</p><p><strong>UPD:</strong> another good solution (the best, I guess - but requiring more work), as suggested in the comments, would be to use an integer column in MySQL and operate with integers in PHP/SQL, only rendering the values as floats to website users.</p>]]></content:encoded></item><item><title><![CDATA[Rules won't work properly when run during cron, if you use node access restrictions]]></title><description><![CDATA[Rules won't work properly when run during cron, if you use node access restrictions]]></description><link>https://pixeljets.com/blog/rules-wont-work-properly-when-run-during-cron-if-you-use-node-access-restrictions/</link><guid isPermaLink="false">5cb48194106efa4dc9d36cbf</guid><dc:creator><![CDATA[Anton Sidashin]]></dc:creator><pubDate>Sat, 20 Oct 2012 09:30:54 GMT</pubDate><content:encoded><![CDATA[<p>I've recently created a USPS tracking module for Drupal, so <a href="http://qwintry.com">Qwintry.com</a> users could get notifications when their international packages change state. 
I've used <a href="http://api.drupal.org/api/drupal/modules!system!system.queue.inc/group/queue/7">queue operations</a> to build requests to the USPS API via cron, and it seems to work great for our customers, but this story is not about the module.</p><!--kg-card-begin: image--><figure class="kg-card kg-image-card"><img src="http://pixeljets.com/sites/default/files/rules-wont-work-properly-when-run-during-cron-if-you-use-node-access-restrictions-217.png" class="kg-image"></figure><!--kg-card-end: image--><p>My plan was to provide a Rules event "The package [tracking number] changed active state from [old state] to [new state]" (the words in square brackets are Rules arguments). <br>On the Qwintry website, a "Package" is a node with a "tracking number" textfield.</p><p>So, basically, my Rule was:</p><!--kg-card-begin: code--><pre><code>if [tracking number] changed active state from [old state] to [new state]:  
   - fetch entity by property "tracking number" = [tracking number]   
   - send [author of loaded entity] a nice email about state change
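   # (implementation note, mine, not part of the rule: "fetch entity by
   # property" maps to an EntityFieldQuery lookup in Drupal 7; keep in mind
   # that EntityFieldQuery respects node access for the acting user, unless
   # custom code hands it a privileged account, e.g.
   # $query->addMetaData('account', user_load(1));)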
</code></pre><!--kg-card-end: code--><p>Easy, huh?</p><p>I've implemented the <code>_rules_event_info()</code> hook in my module code, and created my Rule. <br>The rule worked perfectly when I triggered the event using my admin account, but..</p><p>.. it didn't work when I ran the website cron to trigger the event.</p><p>The event was triggered, but it couldn't find the node with such a property (though I knew that I had a node with that tracking number in the db).</p><p>After hours of debugging, I found out that the issue is that <strong>the "Fetch entity by property" action of Rules uses EntityFieldQuery, which of course respects node access permissions and checks the current user's access. My "package" nodes were private to their owners. And cron runs as the anonymous user when triggering the event! So, cron didn't have enough permissions to load the nodes.</strong></p><p>Checking node access unconditionally makes perfect sense (for lots of use cases!), but in my case it was big trouble. <br>I think the perfect solution for the issue would be an "ignore access permissions" checkbox in the "fetch entity by property" Rules action.</p><p>I've created an issue in the Rules issue queue: <a href="http://drupal.org/node/1804586">http://drupal.org/node/1804586</a> but haven't gotten any replies yet.</p><p>I hope this will save someone else some time!</p>]]></content:encoded></item></channel></rss>