My Sublime config

It doesn’t take long to prepare a Sublime config. I have always been a big fan of Sublime Text and have used it on every machine and every project I’ve worked on. Still, I realized that every time I set up a new environment, it took me some time to remember and collect the handy packages I am familiar with. So I decided to record my Sublime config here in order to auto-provision the environment in the future.

To start with, open Package Control: Advanced Install Package from the command palette.

Then paste in the comma-separated package list below,
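My original list did not carry over here, but as an illustration of the format (these are common, widely used packages, not necessarily my exact list):

```
Emmet, SublimeLinter, GitGutter, SideBarEnhancements, BracketHighlighter, DocBlockr
```

Advanced Install Package accepts exactly this kind of comma-separated input and installs everything in one go.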

And most importantly, before any of the steps above, make sure Package Control itself is installed; the installation script is at

https://packagecontrol.io/installation

 

About the cubic-bezier function and animation in CSS

Animation in CSS is the key to creating engaging UIs. To start with, here is the list of animation properties CSS supports.

animation-delay – Specifies a delay for the start of an animation
animation-direction – Specifies whether an animation should be played forwards, backwards or in alternate cycles
animation-duration – Specifies how long an animation should take to complete one cycle
animation-fill-mode – Specifies a style for the element when the animation is not playing (before it starts, after it ends, or both)
animation-iteration-count – Specifies the number of times an animation should be played
animation-name – Specifies the name of the @keyframes animation
animation-play-state – Specifies whether the animation is running or paused
animation-timing-function – Specifies the speed curve of the animation

The normal way to set up an animation in CSS is with @keyframes.

A typical set of examples can be found in the Animate.css git repo.

But today I will focus on one particular property, animation-timing-function, and its value cubic-bezier.

Cubic-bezier is a mathematical concept that lays a curve across a 1×1 coordinate area. The curve is controlled by 4 points: [(0,0), (x1, y1), (x2, y2), (1,1)].

Since (0,0) and (1,1) are fixed, cubic-bezier() takes only the two middle points’ coordinates to determine the curve. Plotting those control points is the easiest way to see what the function does.

A curve that overshoots above 1 and dips below 0, for example, translates to cubic-bezier(.27,1.26,.66,-0.36).

Just bear in mind this is only a speed function over time; going backwards on the Y axis simply means traversing back through the animation progress. That is how the “bouncy” simulation is easily achieved.
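Putting the pieces together, here is a minimal sketch (class and keyframe names are hypothetical) of a drop-in animation using that curve:

```css
.drop-in {
  animation-name: drop;
  animation-duration: 1.5s;
  /* overshooting control points give the "bouncy" feel */
  animation-timing-function: cubic-bezier(.27, 1.26, .66, -0.36);
  animation-fill-mode: forwards;
}

@keyframes drop {
  from { transform: translateY(-200px); }
  to   { transform: translateY(0); }
}
```

The bounce comes entirely from the timing function; the keyframes themselves are a plain linear drop.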

Lastly, another interesting site to visit is https://easings.net/. It has nice illustrations of how most common curves behave, with a built-in animation demo for each.

Happy animating!

Posted in CSS

My Aria2 Config

Aria2 is a popular tool for downloading files.

And RPC mode, combined with a WebUI, makes aria2 much more practical for managing downloads.

(Web UI Repo: git@github.com:ziahamza/webui-aria2.git)

Here is the aria2 conf I have been using,
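A minimal sketch of such a conf, using standard aria2 options (the directory and token are placeholders):

```
# ~/.aria2/aria2.conf
dir=/data/downloads            # default download directory (placeholder)
continue=true                  # resume partially downloaded files
max-concurrent-downloads=5

# RPC mode, so a WebUI can drive aria2
enable-rpc=true
rpc-listen-all=true
rpc-listen-port=6800
rpc-secret=CHANGE_ME           # token the WebUI must present
```

Start it with `aria2c --conf-path=$HOME/.aria2/aria2.conf` and point the WebUI at port 6800.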

 

SMACSS architecture reference for writing CSS

The concept is to keep CSS more organized and more structured, leading to code that is easier to build and easier to maintain. SMACSS is one of the reference guides for CSS naming rules and conventions.

To begin with, from the SMACSS point of view there are 5 major categories of CSS functionality.

  1. Base – fundamental default element styles, for elements like body, form, input or a
  2. Layout – decides how the page is divided into sections
  3. Module – reusable parts like modals, sidebars, etc.
  4. State – defines how a module changes based on context (different views/devices)
  5. Theme – a layer of theming; optional

For each of these categories there are rules associated with it. Keep these simple points in mind,

  1. Base rules are default styles; there is no need to use !important, and a Reset CSS is helpful
  2. Layout has major & minor. Major layouts are mostly shared across the whole site, and using ID selectors for them is acceptable.
  3. Modules should avoid element selectors and use classes instead. Sub-classing is the key to increasing a module’s portability
  4. States not only apply to layouts/modules but also indicate a JavaScript dependency. !important is recommended at this level.
  5. State changes can be made in 3 ways: class names, pseudo-classes and media queries
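The naming conventions SMACSS suggests for these categories can be sketched like this (the selectors are hypothetical; the “l-” and “is-” prefixes are SMACSS conventions):

```css
/* Layout: "l-" prefix; major layouts may use IDs */
#header { height: 60px; }
.l-grid { display: inline-block; }

/* Module: class-based, with sub-classing for variants */
.btn { padding: 8px 12px; }
.btn-search { width: 80px; }  /* sub-class keeps the module portable */

/* State: "is-" prefix; typically toggled by JavaScript */
.is-hidden { display: none !important; }
```

The prefix alone tells a reader which category a rule belongs to and which of the rules above apply.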

Having covered the basic categories and rules, there is some thinking that goes beyond them.

Minimize CSS depth – for better maintenance, readability and portability.

CSS evaluation has the facts below, and recommendations that follow from them.

Facts:

a. The style of an element is evaluated on element creation (the document can be rendered while still being transmitted)

b. CSS selectors are evaluated from right to left

Recommendations:

i. Use child selectors

ii. Avoid tag selectors for common elements

iii. Use class names as the right-most selector

Using CSS preprocessors is encouraged (LESS, SASS, Stylus).

Posted in CSS

Study about Splunk

Since I already know ELK as a starting point, getting to know Splunk has mostly been a fun experience. But I did spend a fair bit of time comparing the two products. So to start with,

  1. Splunk is a commercial paid product while ELK is open source.
  2. Splunk’s license is expensive while ELK claims to be free, but ELK’s setup requires time-consuming work, and hardware cost is also a potential concern.
  3. Splunk usually runs on-prem while ELK can run anywhere.
  4. For both, the log results will only be as good as the data that gets sent in.
  5. Splunk provides a smoother way to parse data by defining the data fields after the data is already in the system (schema-on-read), whereas ELK needs the data fields defined before the data comes in.

AWS Certified

I am finally an AWS certified developer now. I took the Developer Certification – Associate exam at the end of October.

In the following posts, I may publish a series of tutorials about AWS training & exams.

In the meantime, I will continue working towards AWS Certified Solutions Architect.

Posted in AWS

Learning notes on Terraform and Terragrunt

Terraform and Terragrunt are powerful tools for provisioning cloud resources. This post talks about some of their key features and a few tricks. Firstly, to describe them in quotes,

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Terragrunt is a thin wrapper for Terraform that provides extra tools for working with multiple Terraform modules

Both of them are powerful tools, and I was supposed to write this post a long time ago. I am now finally motivated enough to pull together a few bits and pieces I have learned along the way using terraform in practice. Here is the list.

  1. Remote State & Locking

    This can’t be emphasized enough. Remote state enables trackable resource management and alleviates teamwork conflicts. And most importantly, this feature has been extended by terragrunt so that config settings can be inherited across modules, which means Don’t-Repeat-Yourself. Take the below for example,
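    A sketch of what a root terraform.tfvars can look like, assuming an S3 backend and the pre-0.19 Terragrunt syntax (bucket and table names are placeholders):

```hcl
terragrunt = {
  remote_state {
    backend = "s3"
    config {
      bucket         = "my-terraform-state"    # placeholder bucket
      key            = "${path_relative_to_include()}/terraform.tfstate"
      region         = "ap-southeast-2"
      encrypt        = true
      dynamodb_table = "terraform-locks"       # DynamoDB table for state locking
    }
  }
}
```

Every child module that includes this file inherits the same backend and locking config, with only the state key varying per module.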

    This remote_state setting supports all the other backend types terraform offers as well.

    And when this file is included in a terragrunt run, all of the terraform settings below it are updated accordingly.

     

  2. Modularity is king for re-usable code

    This is my favourite feature of terraform. Creating modules means the code can be broken into multiple levels of granularity, and the trade-off between complexity and repetition can be managed at each level at almost no cost (except for calling init again to re-declare the modules before calling plan). A simple setup looks like the below,
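    A sketch of such a setup, with a hypothetical modules repo as the source:

```hcl
module "vpc" {
  # the double slash separates the repo from the module path inside it
  source     = "git::git@github.com:acme/infrastructure-modules.git//vpc?ref=v0.1.0"
  cidr_block = "10.0.0.0/16"
}
```

Pinning `ref` to a tag means each environment can upgrade module versions independently.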

    (Note: the double slash (//) is intentional and required. It’s part of Terraform’s Git syntax for module sources. Terraform may display a “Terraform initialized in an empty directory” warning, but you can safely ignore it.)

    This is only the starting point. Some further thinking may be,

    1. Different modules could have different dependencies. Since terraform doesn’t support module-level dependencies, this can be done with terragrunt.
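       In pre-0.19 Terragrunt, a child terraform.tfvars can declare this ordering (the paths are hypothetical):

```hcl
terragrunt = {
  # terragrunt apply-all will apply ../vpc and ../mysql before this module
  dependencies {
    paths = ["../vpc", "../mysql"]
  }
}
```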

       
    2. Modules managed by terragrunt require a lot of tfvars files. For a simpler project it may be easier to manage one terraform module that gathers all the pieces from the other modules, which essentially means creating one master module on top of the others. It gives a better experience when transferring the project to pure terraform, and it is much clearer about where the tfvars files are defined.
  3. Loop and if-else syntax with the magic “count”
    Essentially this is terraform interpolation of variables. Terraform supports both list and map types. Below are some of the most inspiring examples I have seen so far.
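    Two of the classic Terraform 0.11-era patterns, sketched with hypothetical variables and a placeholder AMI:

```hcl
variable "names" {
  type    = "list"
  default = ["alice", "bob"]
}

variable "create_eip" {
  default = true
}

# Loop: one instance per element of the list
resource "aws_instance" "workers" {
  count         = "${length(var.names)}"
  ami           = "ami-12345678"                  # placeholder
  instance_type = "t2.micro"

  tags {
    Name = "${element(var.names, count.index)}"
  }
}

# If-else: a count of 0 or 1 acts as a conditional
resource "aws_eip" "optional" {
  count = "${var.create_eip ? 1 : 0}"
  vpc   = true
}
```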

    However, there is a BIG catch when using count: DON’T make it depend on computed resources. count is resolved before any resources are dynamically computed, so at the stage count is calculated, computed resources/numbers won’t be available, and you will get the below error,

     

  4. Bridging the gap between terraform & terragrunt
    1. Include sequence

      When including, the child block’s settings always override the parent block’s settings of the same name; settings with different names from both sides are merged together. Two useful functions to mention here,

      find_in_parent_folders() : This function returns the path to the first terraform.tfvars file it finds in the folders above the current terraform.tfvars file.

      path_relative_to_include() : This function returns the relative path between the current terraform.tfvars file and the path specified in its include block. It is typically used in the root terraform.tfvars file so that each child module stores its state file under a different key.
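      A sketch of the pair in use: the child terraform.tfvars pulls in the root one (pre-0.19 syntax again),

```hcl
terragrunt = {
  include {
    path = "${find_in_parent_folders()}"
  }
}
```

      while the root file uses path_relative_to_include() in its remote state key, so each child lands on its own state path.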

    2. Shared environment variables

      Some environment variables are visible to both terragrunt and terraform. For example, export TERRAGRUNT_IAM_ROLE="arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME" is equivalent to calling terragrunt with the option --terragrunt-iam-role "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME". Further examples on the terraform side could be,

      TF_VAR_name

      TF_CLI_ARGS

      TF_INPUT
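      A few sketches of how those behave (the values are placeholders):

```shell
export TF_VAR_region="ap-southeast-2"  # becomes var.region inside Terraform
export TF_INPUT=0                      # never prompt interactively for input
export TF_CLI_ARGS="-no-color"         # args prepended to every terraform command
```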

In summary, using terraform is a much better experience compared to the alternatives: no never-ending JSON template files and no dodgy (at least IMO) DSL syntax.

Here are some useful links, FYI,

https://github.com/gruntwork-io/terragrunt

https://www.terraform.io/docs/providers/aws/index.html

https://www.terraform.io/docs/configuration/interpolation.html

https://blog.gruntwork.io/terraform-tips-tricks-loops-if-statements-and-gotchas-f739bbae55f9

First dive into Talend Open Studio

Background Story

Back in the day, I was involved in the development of an automation flow that required a language called BPEL (Business Process Execution Language). It’s the type of development where the GUI presents different components, and building a flow means dragging & dropping them on a canvas.

To be honest, this kind of development looks fresh at the start. But later on, more issues surfaced as we interacted further with the tools, and we were forced back into the code to trace down errors: under the “fancy” cover of the GUI, it’s an auto-generating code mechanism that still produces a fairly complex piece of Java code.


Coming back to the topic of this post, Talend relates to that previous experience because both use the same development mode: an interactive GUI to design and orchestrate workflows. And, perhaps not surprisingly, both generate Java code behind the scenes.

So here is what I have experienced with Talend Open Studio.

The Good
  1. It’s essentially JAVA! The generated outcome is essentially Java code. That means if there is a syntax error, or a misunderstanding of how the code works, we can always look into the “Code” tab beside the “Designer” panel of the canvas and find out the exact reason.
  2. The orchestration flow is clear and easy to read from the start, given the canvas-based design illustration, which can be much cleaner and easier to read than code. (Opinionated!)
  3. Many featured components come ready to use instead of being implemented from scratch, e.g. FTP connections; file reading, listing and writing; AWS S3 interaction; and data flow processors such as tMap, tNormalize, tUnite, tJavaFlex and tJavaRow.
The Bad
  1. It’s essentially JAVA! The runtime env is slow due to the JVM and requires extra compilation beforehand. Digging through Java runtime errors is not fun, and the generated code gets more and more complex as more components are put on the canvas, eventually taking up all the resources.
  2. The context is a double-edged sword. It provides a clean and neat way of passing variables between jobs. But without properly managing contexts in a re-usable, clean way (similar to the concept of “eliminating global variables” in other code), their number and the maintenance overhead can easily blow up.
  3. Some learning curve is expected when dealing with components like tJavaFlex; they may not work as originally expected at first use. And the documentation about these components is just terrible all over the internet.
Learnings
  1. Get a faster machine with more RAM
  2. Plan context management properly beforehand
  3. A good way of learning Talend is simply to keep using it. It may take some time at the start, but it will always pay back at a later stage. Especially when a particular component is unfamiliar, putting in tons of “System.out.println” definitely helps in understanding the priorities and flow.
  4. Putting in a lot of tWarn components as placeholders and logging messages helps in understanding the application, and helps the program stay better self-organized.
  5. Use tRunJob wisely, since each job represents a standalone process. That means each job can be run independently and produce a valid result based on the ENV and inputs.
  6. Distinguish the concepts of flow and row. Flow mainly focuses on process orchestration while row represents the data stream. That said, there are many cases where we need to convert row data into a different flow and vice versa. Think wisely; there are a lot of options here.
  7. Links between components such as “main” and “iterate” pass the data rows along the flow.
  8. Links between components such as “onSubJobOk”, “onSubJobError”, “onComponentOk” and “run If” trigger once the current component completes and the condition is met.
  9. For passing values from a child job to a parent job, context and bufferOutput are commonly used. Be cautious about global variables.
  10. “CHILD_RETURN_CODE” is a useful tool to reflect a tRunJob’s running status.
  11. A useful tip: + triggers a lookup of all the global variables available at a given point.
  12. All exceptions should be handled properly; otherwise they will escalate to the top until the process gets killed. Same rule as in Java.

Docker LEMP stack build

Docker is great. It provides lightweight virtualization with almost zero overhead.

And there are tons of articles out there about Docker best practice. The short-versioned bullet points are as below,

  1. Containers should be ephemeral
  2. Use a .dockerignore file
  3. Avoid installing unnecessary packages
  4. Each container should have only one concern; “one process per container” may not always hold true.
  5. Minimize the number of layers
  6. Sort multi-line arguments
  7. Leverage the build cache
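Points 5 and 6 can be sketched in a single Dockerfile fragment (the package list is illustrative):

```dockerfile
# One RUN instruction = one layer; packages sorted alphabetically,
# with cleanup in the same layer so the image stays small
RUN apt-get update && apt-get install -y \
    curl \
    git \
    nginx \
 && rm -rf /var/lib/apt/lists/*
```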

Given Docker is so good to have, I took the initiative of putting a general PHP development env into Docker, so I built a small git repo for a Docker LEMP stack.

Github Hanswang/docker-lemp

This repo makes use of docker-compose.
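A minimal sketch of such a compose file for a LEMP stack (image tags, paths and the password are placeholders, not the exact contents of the repo):

```yaml
version: '2'
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./www:/var/www/html
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
  php:
    image: php:7-fpm        # nginx forwards *.php requests here on port 9000
    volumes:
      - ./www:/var/www/html
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: secret   # placeholder
    volumes:
      - db-data:/var/lib/mysql
volumes:
  db-data:
```

One `docker-compose up -d` brings up the whole stack, with each concern in its own container.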

CORS and cross-origin HTTP requests

It may seem that you already know quite a bit about HTTP requests and CORS. But there is always something to learn when you look closer.

CORS is short for Cross-Origin Resource Sharing. As the name shows, it gives web servers cross-domain access control over either static or dynamic data; for static data sharing, this is typically how CDNs work. In essence, it adds new HTTP headers that let servers describe the set of origins permitted to read that information from a web browser. This is the basic background knowledge.

An interesting note about these requests is that some of them are “preflighted”. The reason is that many HTTP methods can have side effects on the server’s data. To prevent unintended data impact, the more elegant solution is to have the browser “preflight” the request with an HTTP OPTIONS request to get “approval” from the server first, and then send the actual request.

There are conditions that trigger a CORS preflight: a request meeting any of the 3 conditions below will be preflighted. Otherwise, a simple request is sufficient to achieve the goal.

  1. The request uses a method other than the following

    • GET
    • POST
    • HEAD
  2. The request includes any headers other than the below (the CORS-safelisted request headers)
    • Accept
    • Accept-Language
    • Content-Language
    • Content-Type (but note the additional requirements below)
    • DPR
    • Downlink
    • Save-Data
    • Viewport-Width
    • Width
  3. The Content-Type header has a value other than the following
    • application/x-www-form-urlencoded
    • multipart/form-data
    • text/plain

Below is the illustration of how requests are done between client and server.

Here is the full header exchange for a preflighted request. First is the preflight request.
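A sketch of what that round-trip looks like (the host, origin and custom header are hypothetical):

```http
OPTIONS /resources/post-here/ HTTP/1.1
Host: api.example.com
Origin: https://app.example.org
Access-Control-Request-Method: POST
Access-Control-Request-Headers: Content-Type, X-Custom-Header

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://app.example.org
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Headers: Content-Type, X-Custom-Header
Access-Control-Max-Age: 86400
```

Access-Control-Max-Age lets the browser cache this approval so it doesn’t preflight every single request.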

Second is the actual request for the resource.
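And a sketch of the actual exchange that follows (same hypothetical host and origin):

```http
POST /resources/post-here/ HTTP/1.1
Host: api.example.com
Origin: https://app.example.org
Content-Type: application/json
X-Custom-Header: value

{"title": "hello"}

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://app.example.org
Content-Type: application/json
```

The browser only exposes the response to the page because Access-Control-Allow-Origin matches the requesting origin.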