<script> tags. Some might say they power the internet.
There are all different kinds of marketing
<script> tags. There are simple ones that you just embed in the global footer and are done with. Then, there are complex ones that fire when a user “converts” and ask you to send back the total order amount, a list of products, tax and shipping costs and the user’s mother’s maiden name.
Adding and removing these tags is such a common task for any website that entire platforms (see: Google Tag Manager) have been built just for managing them. They introduce a whole new knowledge domain, with buzzwords like the data layer.
That’s all fine and dandy, but what happens when they don’t work? Below is a story where a marketing pixel caused a big problem, and a proposal for how we can make these backbones of the internet a little less dangerous.
I first heard about Prometheus on an episode of The Changelog Podcast. Before tuning in, I read the description and was intrigued. Monitoring is an important part of my day job, where my team is responsible for ensuring the technical end of operations runs smoothly for many large-scale ecommerce businesses. I saw that the episode featured an engineer from SoundCloud (a service I use regularly for streaming music) and decided to give it a spin.
In my work at Something Digital I’ve recently taken a deep dive into profiling and improving performance, at scale, of the search results page (
/catalogsearch/result/index). In our case, we have a client whose traffic profile is very search-heavy, and who ran into performance issues due to a traffic surge to that route. The investigation was very interesting, and I thought it would be beneficial to document some of the key findings here.
Recently, I've been doing some work with Magento Enterprise's "Rule-Based Product Relations" feature, or, as it's called in the source code,
At Something Digital we have a client with an interesting requirement that involved some customization to the module. As a result, I spent some time digging into the module's mechanics. Since technical documentation is sparse, I figured I'd share my learnings so anyone interested can benefit.
When a team of people, both technical and non-technical, collectively operate a shared software installation, things are bound to go wrong at some point. As the technical folk, we are often engaged to perform forensic analysis. This type of work frequently includes tasks such as
grep-ping server access logs for certain request paths, dates, and IP addresses or reviewing any other logs or information related to whatever incident may have occurred.
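A forensic pass like that might look something like the sketch below. The log file, IP address, paths, and dates here are all made up for illustration; the log follows a common combined-log-style format, but yours may differ.

```shell
# Hypothetical access log for illustration (values are made up).
cat > /tmp/access.log <<'EOF'
203.0.113.7 - - [12/Mar/2019:10:01:22 +0000] "GET /checkout/cart/ HTTP/1.1" 200 512
198.51.100.9 - - [12/Mar/2019:10:02:10 +0000] "GET /catalogsearch/result/ HTTP/1.1" 200 1024
203.0.113.7 - - [13/Mar/2019:09:15:03 +0000] "POST /checkout/onepage/ HTTP/1.1" 302 0
EOF

# All requests from one IP address (dots escaped so they match literally):
grep '^203\.0\.113\.7 ' /tmp/access.log

# Narrow further to a request path and a date:
grep '^203\.0\.113\.7 ' /tmp/access.log | grep '/checkout/' | grep '12/Mar/2019'
```

Chaining `grep`s like this is crude but effective: each pipe stage throws away more of the log until only the requests relevant to the incident remain.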
This post is about a specific incident that came up recently. It was not a major one, but there were some learnings for me along the way and I figured it would be interesting to document the process.
I don't know about you, but I'm not a fan of typing a command into the terminal and then nothing happening. Generally, I'll run any command that I know might take a while in verbose mode. That's great; I know the process really is running. But what I don't know is how far along it is... and how much more there is to go.
Today I found out about the
pv command. Boy, is that a game changer...
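In short, `pv` ("pipe viewer") sits in the middle of a pipeline and shows how much data has passed through, the throughput, and, when it can determine the input size, a progress bar with an ETA. A minimal sketch (the sample file here is generated just for the demo):

```shell
# Generate a sample file so there's something to stream (demo only).
head -c 1048576 /dev/urandom > /tmp/sample.bin

# Instead of: gzip < /tmp/sample.bin > /tmp/sample.bin.gz
# insert pv so you can watch the bytes flow; pv reads the file itself,
# so it knows the total size and can show a percentage and ETA.
pv /tmp/sample.bin | gzip > /tmp/sample.bin.gz
```

The same trick works anywhere you'd otherwise stare at a silent pipeline, e.g. `pv dump.sql | mysql mydb` to watch a database import.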