
H N Ramkumar
H N Ramkumar is a Technical Architect at Robosoft and has led the design and development of many projects across Mac, iOS and Android.

Functional Programming Goes Mainstream

Functional programming is a programming paradigm in which applications are composed from pure functions, avoiding shared mutable state, avoiding side effects and being declarative. We can contrast this with object-oriented programming, where data and behaviour are colocated, and with procedural programming, an imperative style which groups algorithms into procedures that tend to rely on shared mutable state. Functional programming lets us compose functionality from functions. Composition is a simple, elegant, and expressive way to clearly model the behaviour of software. The process of composing small, deterministic functions to create larger software components and functionality produces software that is easier to organise, understand, debug, extend, test, and maintain.

Pure Function and Side Effect

In functional programming, we build abstractions with functions. Functions serve as black-box abstractions. In this style we prefer pure functions. A pure function is one which, given the same input, always returns the same output and has no side effects. A side effect is any application state change that is observable outside the called function other than its return value. Some of the ways a computation can introduce side effects are:

  • Modifying any external variable or object property (e.g., a global variable, or a variable in the parent function scope chain)
  • Logging to the console
  • Writing to the screen
  • Writing to the network
  • Inserting a record into a database
  • Querying the DOM
  • Triggering any external process
  • Calling any other functions with side-effects

Side effects are mostly avoided in functional programming; only controlled side effects are allowed, and we manage state and component rendering in separate, loosely coupled modules.

For example, the simple function below, which computes the square root of a given number, is pure: it gives the same output no matter how many times it is called with a given input, it is devoid of any side effect, and the computation depends only on the input value and the return value.

/*
    Algorithm:
    sqrt(2)
    Guess       Quotient (Number / Guess)       Average ((Quotient + Guess) / 2)
    1           2/1 = 2                         (2 + 1)/2 = 1.5
    1.5         2/1.5 = 1.3333                  (1.3333 + 1.5) / 2 = 1.4167
    1.4167      2/1.4167 = 1.4118               (1.4118 + 1.4167) / 2 = 1.4142
    1.4142      ...                             ...
*/

function sqrt(x) {
    function average(a, b) {
        return (a + b) / 2.0;
    }

    function square(x) {
        return x * x;
    }

    function abs(x) {
        return x > 0 ? x : -x;
    }

    function sqrt_iter(guess) {
        return good_enough(guess) ?
            guess :
            sqrt_iter(improve(guess));
    }

    function improve(guess) {
        return average(guess, x / guess);
    }

    function good_enough(guess) {
        return abs(square(guess) - x) < 0.001;
    }
    return sqrt_iter(1);
}

sqrt(2)

On the other hand, consider the function checkPrice below. It is not pure, as it depends on the value minimum, which is not among its inputs.

let minimum = 21;

const checkPrice = price => price >= minimum;
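A pure variant makes the dependency explicit by passing the minimum in as an argument (checkPriceAgainst is an illustrative name, not from the original):

```javascript
// Pure: the output depends only on the arguments; no outside state is read.
const checkPriceAgainst = (minimum, price) => price >= minimum;

checkPriceAgainst(21, 30); // true
checkPriceAgainst(21, 20); // false
```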

One important thing to notice is that the functional style of programming is amenable to equational reasoning. That is, we can replace the algorithm that computes the square root with an equivalent algorithm and, since it is a pure function, the rest of the program is not affected. Thus it is a black-box abstraction.

Another hallmark of this style of programming is avoiding mutation through assignment. Every change produces a new value, so the assignment statement is not needed at all. In fact, the seminal book on computing “Structure and Interpretation of Computer Programs” defers assignment until well into the book!
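As a small sketch of this style in JavaScript (the variable names are illustrative), each "update" builds a new value and leaves the original untouched:

```javascript
const xs = [1, 2, 3];
const ys = [...xs, 4];               // new array; xs is unchanged

const user = { name: 'Asha', age: 30 };
const older = { ...user, age: 31 };  // new object; user is unchanged
```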

Name Isolation Techniques

The parameter names passed to a function are local to that function. The parameters and local names declared inside a function are bound within it and isolated from the outside; names a function uses but does not declare are its free variables. Functions declared inside a function are likewise unique and isolated to that function: sqrt_iter, improve and good_enough in the above example are such inner functions. Parameters of the outer function are available in the inner functions; this is called lexical scoping. The parameter x of sqrt is available in all its inner functions. Lexical scoping dictates that a name is looked up in its enclosing functions until one is found. A function can also access a variable outside itself but within its lexical scope, even after the outer function has completed execution. This is called a closure, and it is a powerful technique of encapsulation employed in functional programming.

Thus a function effectively forms a module, thereby achieving information hiding and a namespace.
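A minimal closure sketch (makeCounter is an illustrative name): the inner function keeps access to count even after makeCounter has returned, so count is completely encapsulated:

```javascript
function makeCounter() {
    let count = 0;
    return function () {      // closes over `count`
        count = count + 1;
        return count;
    };
}

const next = makeCounter();
next(); // 1
next(); // 2 — `count` lives on between calls, invisible from outside
```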

Recursion and Iterative Process

The functional style uses recursion instead of explicit looping constructs. We can shape the recursion so that it generates an iterative process, and tail call optimisation (that is, if the recursive call is in the last, or tail, position, the stack frame can be reused) makes such recursion as efficient as a loop in implementations that perform it. For example, the function below to compute the factorial of a number is not optimal: it results in a stack overflow for large values of n.

function factorial(n) {
    return n == 0 ? 1 : n * factorial(n - 1);
}

On the other hand, this version, where the recursion happens in tail position, is optimal and equivalent to a loop (in implementations that perform tail call optimisation).

function factorial(n) {
    return fact_iter(1, 1, n);
}
function fact_iter(product, counter, max_count) {
    return counter > max_count
           ? product
           : fact_iter(counter * product,
                       counter + 1,
                       max_count);
}

Similarly, every recursive function can be recast to produce an iterative computation.
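For instance, here is a sketch (following the style of SICP) of Fibonacci recast from tree recursion into a tail-recursive iterative process:

```javascript
function fib(n) {
    // a and b carry the state a loop would keep in local variables
    function fib_iter(a, b, count) {
        return count === 0 ? b : fib_iter(a + b, a, count - 1);
    }
    return fib_iter(1, 0, n);
}

fib(10); // 55
```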

Higher Order Functions

Functions that manipulate functions – that accept functions as arguments or return functions as values – are called higher order functions. Higher order functions serve as a powerful abstraction mechanism. They are possible because in functional languages functions are first class. By first-class, we mean functions are like any other values:

  • They may be referred to using names
  • They may be passed as arguments to functions
  • They may be returned as results of functions
  • They may be included in the data structures

In the below example, makeAdder is a higher order function as it returns a function as value.

function makeAdder(x) {
    return function(y) {
        return x + y;
    };
}

let plus5 = makeAdder(5);
console.log(plus5(10));
let plus10 = makeAdder(10);
console.log(plus10(10));

In the below example, Array.prototype.map is a higher order function as it takes a function as argument.

const cars = [
  {
    make: 'Maruti',
    model: 'Alto',
    year: 2010,
  },
  {
    make: 'Hyundai',
    model: 'i10',
    year: 2018,
  },
];
const makes = cars.map(car=>car.make);

Function Composition

Given two functions f and g, the composition of the two functions, f ⊙ g, is defined as f(g(x)); for functions f, g and h, f ⊙ g ⊙ h is f(g(h(x))), and so on. We can define a variadic function compose in JavaScript which performs function composition as described.

const compose = (...fns) => x => fns.reduceRight((y, f) => f(y), x);

Let us define a few functions:

const inc = x => x + 1;
const double = x => x + x;
const square = x => x * x;

We can define a composition as

let composition = compose(square, double, inc);
console.log(composition(5));

which is equivalent to the function call below, but more readable and easier to reason about.

console.log(square(double(inc(5))));

Function composition satisfies the mathematical property of associativity, that is

compose(f, compose(g, h)) === compose(compose(f, g), h);
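Since two distinct function objects are never === in JavaScript, this equality is extensional: both groupings produce the same result for every input. A quick check, repeating the earlier definitions so the sketch is self-contained:

```javascript
const compose = (...fns) => x => fns.reduceRight((y, f) => f(y), x);
const inc = x => x + 1;
const double = x => x + x;
const square = x => x * x;

const left  = compose(square, compose(double, inc));
const right = compose(compose(square, double), inc);

left(5) === right(5); // both are square(double(inc(5))) = 144
```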

If you notice carefully, function composition takes functions of only one argument. If the composition involves functions of more than one argument, we can convert them to one-argument functions using a technique known as currying. The concept is simple: you can call a function with fewer arguments than it expects, and it returns a function that takes the remaining arguments. When you supply the last argument, the actual computation takes place.

const curry = (
    f, arr = []
  ) => (...args) => (
    a => a.length === f.length ?
      f(...a) :
      curry(f, a)
  )([...arr, ...args]);

function sum(a, b, c) {
    return a + b + c;
}
console.log(sum(2, 3, 4));

let curriedSum = curry(sum);

console.log(curriedSum(2)(3)(4));
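The same curried function also accepts arguments in groups, which gives partial application for free (definitions repeated so the sketch is self-contained):

```javascript
const curry = (f, arr = []) => (...args) =>
    (a => a.length === f.length ? f(...a) : curry(f, a))([...arr, ...args]);

const sum = (a, b, c) => a + b + c;
const curriedSum = curry(sum);

const add5 = curriedSum(2, 3); // still waiting for one more argument
add5(4);                       // 9
curriedSum(2)(3, 4);           // 9
```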

Declarative vs Imperative

Declarative programming is a way of computing by declaring what something is instead of spelling out how you get it. In imperative programming, we spell out how a computation is done in explicit steps via detailed control flow statements. Declarative programs abstract away the flow control process (the how) and instead spend lines of code describing the data flow: that is, what to do. Functional programming, which prefers expressions over statements, is declarative.

The quicksort implementation in Haskell below is an epitome of declarative programming:

qsort :: (Ord a) => [a] -> [a]
qsort [] = []
qsort (x: xs) = qsort smaller ++ [x] ++ qsort larger
                where
                    smaller = [a | a <- xs, a<=x]
                    larger = [b | b <- xs, b > x]

Consider the below code snippet:

const cars = [
  {
    make: 'Maruti',
    model: 'Alto',
    year: 2010,
    units: 150  
  },
  {
    make: 'Hyundai',
    model: 'i10',
    year: 2018,
    units: 300  
  },
  {
    make: 'Hyundai',
    model: 'i20',
    year: 2015,
    units: 150
  },
  {
    make: 'Maruti',
    model: 'Dezire',
    year: 2012,
    units: 200
  },
];

// Declarative, functional
let totalUnits = cars
    .filter(car=>car.year >= 2015)
    .map(car=>car.units)
    .reduce((acc, cur) => acc + cur, 0)

// Imperative
let totalUnits = 0
for (let car of cars) {
    if (car.year >= 2015) {
        totalUnits += car.units
    }
}

In the imperative style, one has to follow the control flow to deduce what is happening in the code; the operations do not stand out. This becomes problematic in large code bases. On the other hand, in the declarative style, if one knows the meaning of the higher order functions map, filter and reduce, the meaning of the program stands out and it becomes that much easier to reason about the code.

Functional Programming is not new

In the 1930s, two mathematicians, Alan Turing and Alonzo Church, produced two different but equivalent universal models of computation: each model can compute exactly the functions the other can. A Turing machine is a mathematical model of computation that defines an abstract machine which manipulates symbols on a strip of tape according to a table of rules. The model of computation invented by Church, which is based on function application, is called the lambda calculus. Lambda calculus is the mathematical underpinning of functional programming.

Lambda calculus was hugely influential on software design, and prior to about 1980 many influential icons of computer science were building software using function composition. The programming language Lisp, created in 1958 by John McCarthy, was heavily influenced by lambda calculus; today, Lisp is the second-oldest language still in popular use. However, pure functional languages were notorious for being inefficient, so efficient imperative languages like C grew in popularity and functional programming was largely relegated to the academic world. But with modern computing hardware making functional programs fast enough, and with functional programming’s ability to tame the complexity caused by shared mutable state, interest in FP has been rekindled. The rise of JavaScript, which is a functional language at its core, has been another reason for the rise in popularity of functional programming.

Indeed, functional programming has become mainstream and it is an interesting time in the area of programming paradigms.

References

  1. Mostly Adequate Guide to Functional Programming – https://mostly-adequate.gitbooks.io/mostly-adequate-guide/content/
  2. Structure and Interpretation of Computer Programs — JavaScript Adaptation – https://source-academy.github.io/sicp/
  3. Composing Software: An Exploration of Functional Programming and Object Composition in JavaScript by Eric Elliott

Here’s How Google Voice and Alexa Skills Are Transforming Voice Applications

When Apple launched Siri in 2011, no one predicted the success of voice technology. 8 years later, almost 100 million mobile phones have a voice assistant. While Industry leaders such as Google and Amazon hold a major market share, voice technology is useful across varied industry channels. From banking to the corporate sector, every industry is observing an increasing number of voice integrations to fulfill customer demands.

Let’s explore why voice is popular, its applications, the technical aspects, and best practices.

Why Voice is Popular

Increased awareness, ever-changing user demand, and improved user services through voice integration have triggered a digital shift. The increasing need for speed, convenience, efficiency, and accuracy has added to the demand for voice optimization in mobile and other devices. Voice assistants are being integrated into several IoT devices such as appliances, thermostats, and speakers, offering new ways to connect with the user.

Here are some reasons why voice technology is popular in multiple operating devices.

  • On average, we can speak 130 words per minute but write only about 40 words per minute. The count varies by user, but the idea behind using voice is clear: speaking is roughly three times faster than writing, so users save the time they would otherwise spend typing.
  • In 2012, speech recognition error rates were around 33%; by 2017 they had been reduced to about 5%. Thanks to this evolving and improving technology, voice search has become far more accurate. So, whenever a user speaks an instruction to a device, it is highly likely that the device will understand it without errors.
  • It is easier to access voice technology through hardware. Due to the high adoption rate and easy availability, several users have smart devices such as smartphones, smartwatches, and smart speakers. The accessibility of voice search and integration is also high, which justifies the popular demand.

Voice Assistant: Use Cases

Voice technology is not just valuable in the entertainment industry. Healthcare can benefit from voice integration as it will help patients manage their health data, improve claims management, and seamlessly deliver first-aid instructions.

Let’s explore further to find out where voice technology can be utilized.

Enterprise Support

Consider the following two scenarios, in which you will also find two separate use cases of voice for the enterprise.

  1. A sales representative is out on his market rounds, making notes of activities and important pointers as he hops from client to client. He is writing it all down but at the back of his mind, he is thinking of how he would have to populate the fields of his office software with this data at a later stage. This is tiring to even think of, right?
  2. Everybody is using office supplies and no one is making an effort to replace anything. This goes on until the inventory runs dry and there are no office supplies to use anymore. Assigning this task to a person is such a waste of manpower because his time can be better utilized.

In the first case, the sales representative can directly utilize the smartphone to populate the software field. As he speaks, everything is entered directly to the database, which can be edited later, but the major part of the job is immediately accomplished.

In the second case, Echo Dot can be utilized as a shared device to order inventory supply. You have to maintain the inventory levels and feasibility of the purchase but wouldn’t it be nice to ask Echo to order a box of pens?

Healthcare Support

Voice assistants can lead to seamless workflows in the healthcare industry. Here’s how:

  1. There are multiple healthcare devices to help you monitor your health. For example, a diabetes monitor which can be integrated into your Alexa. The solution can help you follow your medication, diet, and exercise properly – everything through voice.
  2. Similarly, non-adherence to medication is a reason for declining health. Voice reminders can be integrated into Alexa to remind elders to take medication on time.
  3. Alexa Skills are also being utilized to deliver first-aid instructions through voice. When in an emergency, you don’t have to read it, you only have to hear it and move ahead.

Banking Support

The application of voice technology in banking is obvious. You can check your account balance, make payments, pay bills, and raise a complaint. These are basic functionalities that help users accomplish tasks in less time without any hassle.

In an advanced version, voice technology in banking is currently being explored in collaboration with NLP (Natural Language Processing) and machine learning. In the future, users can expect advanced support from the voice assistant. You can ask the assistant if you are investing in the right fund and the assistant will present statistics to help you understand the situation.

Automobile Support

Another known application of Alexa Skills and Google Voice is in the automobile industry. Voice technology integration can be used to offer roadside assistance. Voice commands help you execute emergency automobile tasks without reading instructions online or even typing them into a search engine. You only have to call out to your phone and ask for help.

Application Chatbots

Amazon Lex is a service offered by Amazon to create voice and text conversational interfaces in an application. Its deep learning functionality is achieved through automatic speech recognition, which automatically converts speech to text. Further, Amazon Lex has natural language processing abilities which enable engaging customer interfaces for high user satisfaction. Together, these features make it easier for developers to create conversational bots in an application.

Google Voice and Alexa Skills

Alexa Skills and Google Voice are extensions that help build voice-enabled applications. These are the tools that enable us to make virtual assistants and speakers.

Alexa Skills


  1. In Alexa, Headspace is a well-known skill. Headspace Campfire, Headspace Meditation, and Headspace Rain are all different skills.
  2. To invoke these skills, the user has to speak an invocation phrase. This can be anything, but we are using Headspace as the example invocation phrase here. The phrase matters because your skill’s discovery depends on it.
  3. The skill is matched against different utterances such as “ask headspace”, “tell headspace” or “open headspace”. Sometimes it can trigger even when you say, “Alexa, I need to relax.”
  4. The intent is the task a user is trying to achieve with a voice command. In “Start meditation”, start is the intent.
  5. If a user says start <game>, start is the intent and <game> is the slot.
  6. It is also possible to execute a command with no intent, such as “Alexa, launch headspace”. This is similar to opening the home page in a browser.
  7. A fulfilment is the API logic that executes an action or intent such as start <game>.
  8. Lastly, Alexa has built-in intents which are not governed by an external function. If you say, “Alexa, stop,” that is an internal command.

Google Voice


Google Voice is similar to Alexa Skills with some minor differences:

  1. In Google Voice, Headspace offers actions (similar to intents), the conversation logic is defined in Dialogflow, and agents at the backend handle actions such as headspace meditation or headspace rain.
  2. The agent in Google accomplishes actions – “Ok Google, start headspace meditation.”
  3. You can also use “Ok Google, launch headspace” to start an action with no intent.
  4. Google has built-in intents like “Ok Google, help me relax”, which gives Google several options, one of which is headspace.

VUX Design Research

When you are building actions or skills for Google or Alexa, it is necessary to carry out the research outlined below:

  1. Understand how different types of users are interacting with the platform or a specific category.
  2. Understand the intent of the user and know the why of using the voice assistant.
  3. Understand how actions and skills are performed with the help of different phrases or utterances.

VUX Design Best Practices

  1. Be Specific – Users can consume far more information graphically than by voice, and no one can sit through long voice output. So keep what the assistant says specific, and make any accompanying visuals appealing.
  2. The Introduction is Key – When the user starts the action with no intent (“Alexa, launch headspace”), introduce yourself. Welcome users and help them adjust to your application.
  3. Prompt Again – When the user stops responding, prompt again. But do this without overdoing it; a short re-prompt is enough.
  4. Include Error Messages – You need to include error and help messages to pass certification. For example, ask the user to keep the mic close for a better response.
  5. Handle Invalid Inputs – When the voice command is unclear, have responses for invalid input stored for better interaction.
  6. Respond Immediately – Avoid delays. You can’t give a 10-second speech before playing the song the user wants. If the user says, “Play XYZ,” simply say “Playing XYZ” and play. When the user says pause, pause.
  7. Don’t Include Unavailable Intents – Do not include any intent in the app which is not feasible for voice commands. This can lead to certification rejection.

How to Set Up Voice Skills?

Alexa Skill

1. Start by signing in or logging in to your dashboard and then click on ‘Get Started’ under Alexa Skills Kit. After that, click ‘Add new skill’.

2. As we are creating a custom skill, click on the ‘Custom interaction model’ and add the name and invocation name. The name is defined for your dashboard and the invocation name is used to activate the skill on Amazon Echo. Once you have entered the skill information, move to the interaction model and ‘Launch Skill Builder’. Here, add the intent name and utterances. The intent is the intention of the user behind a skill.

For example, if the intent is ‘Hello World’, utterances can be, ‘Hi’, ‘Howdy’, etc.

Save this model and go to ‘Configuration’ to switch to AWS lambda.

3. Now, go to the lambda page and select ‘Create function’. There are several templates for ‘Alexa-SDK’, in our example, we will use ‘Alexa-skill-kit-SDK-factskill’.

Enter the name of the function and select ‘Choose an existing role’ from the drop-down. The existing role is ‘lambda_basic_execution’.

In the ‘Add triggers’, you will find ‘Alexa skill kit’ which will help you add ‘Alexa skill kit’ trigger to the created lambda function.

4. Before moving forward, create a test function and copy the ARN endpoint for the next step.

5. Lastly, complete the process by going to ‘Configuration’ and completing the Endpoint section by selecting ‘AWS Lambda ARN’ and pasting the copied ARN here. Click on the ‘Enable’ switch and you are done.

Google Voice

1. Create a Dialogflow account or log in to an existing account. Click on ‘Create Agent’ and add details such as name, Google project, etc.

2. Go to Intent and click on ‘Create intent’. Give your intent a name and add utterances or sample phrases. (Similar to what we discussed above).

3. In the ‘Response’ field, type the message that you wish for your intent to respond with.

Voice Application Best Practices

1. Asynchronous Information Gathering

Voice applications are hosted on the cloud, and gathering information asynchronously can improve response time. When extracting information, there are several data points which live on different server endpoints. Latency degrades the experience immediately: the user has to watch the light on Alexa spin until the data is fetched. This is why it is best to use an API to bring data from the various endpoints together for real-time interaction.
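In JavaScript, the concurrent fetch can be sketched with Promise.all (fetchJson and the endpoint URLs are stand-ins, not real services):

```javascript
// Stand-in for a real HTTP call; resolves with a stub payload.
const fetchJson = (url) => Promise.resolve({ url, data: 'stub' });

async function gather() {
    // All three requests start at once; we wait only for the slowest,
    // instead of paying each endpoint's latency in sequence.
    const [profile, weather, news] = await Promise.all([
        fetchJson('https://api.example.com/profile'),
        fetchJson('https://api.example.com/weather'),
        fetchJson('https://api.example.com/news'),
    ]);
    return { profile, weather, news };
}
```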

2. Mapping Conversations

Your answers on Alexa should be kept up to date. If you are unsure what to add, look at the FAQ section of your website. If there are FAQs that can’t be answered without graphical representation, it is best not to add them.

3. Support Linguistics

Different users can express themselves differently. For instance, you may ask, “Alexa, book a cab to XYZ,” but your friend may say, “Please book me a ride to the airport.” The difference in linguistics is natural to most of us and we don’t want to change it just to interact with an application. If the application itself adjusts to our linguistics, that is something useful. So, add these utterances in your voice assistant to improve the experience of your users.

4. Use Short Responses

If your responses are long, your users will soon lose interest. It is hard to concentrate when a voice assistant delivers long responses; users get impatient and leave the conversation midway. Ensure that your voice responses are short and sweet. Also understand that a sentence that seems short in writing may be irritatingly long when spoken, so take extra care to trim such responses.

5. Use minimum Choices

If you are giving your user choices, stick to a maximum of three. Fewer choices are always better, so keep your options to a minimum. With more choices, it becomes hard for the user to recall the first choice by the time the assistant is reading out the third or fourth.

6. Reduce Pressure

Reduce the pressure on your audience by improving response time and responses. If your voice assistant stays quiet for only a very short time after speaking, you can put the user under stress. Allow a considerable pause, or if you must keep the wait short, make the next sentence supportive, for example, “Would you like me to repeat the list?”

Challenges of Voice Assistants

Voice Tech is Ambiguous

Voice tech is often ambiguous because it fails to reveal what users can achieve with it. With visual tech, it is possible to extract functionality through buttons, images, labels, etc. However, something like this is not possible with voice.

Further, hearing something and retaining it is usually harder than reading or seeing it. That is why it is hard to know the best way to deliver options when the application has several options for a particular action.

Privacy Concerns

Privacy is a cause for concern for every business, and more so for ones that offer voice assistants.

Why? Let’s find out:

While Siri is trained on your voice and activates whenever you say ‘Hey Siri’, anyone can then ask a question: Siri just doesn’t know the difference between your voice and someone else’s.

How can it impact us? Of course, it affects permissions.

Think of a child ordering several toys from Alexa. This would happen without parental authorization. It also means that anyone near your Alexa could misuse your credit card. Voice technology and assistants are convenient, but security remains a major concern.

Conclusion

The consumer-led era today demands platforms and technologies that can simplify and step up the customer service experience. Voice technology is revolutionizing customer experience and enabling personalization at a whole new level. Proactive strategies based on customer feedback, browsing, and purchase trends have become much easier to implement, pushing companies to upgrade their customer experience models backed by technology.

Factors like speed, convenience, and personalization are key differentiators that influence customer decisions and buying behavior, and voice technologies like Google Voice and Alexa Skills are facilitating this process matchlessly. With most businesses switching to technology-based solutions, voice technology is likely to play a crucial role for organizations driven by customer service excellence.


Web APIs: basics every developer needs to know

An API is an interface that makes it easy for one application to ‘consume’ capabilities or data from another application. By defining stable, simplified entry points to application logic and data, APIs enable developers to easily access and reuse application logic built by other developers. This allows a clear separation between interface and implementation. A well-designed API lets its user rely only on the published public interface and abstracts away implementation details. This enables the API developer to evolve the system independently of the client and augurs well for the development of highly scalable systems. In the case of ‘web APIs’, that logic and data is exposed over the network.

Now, the web API is the hot currency of the digital world. Organisations like Google, Amazon, Facebook, Salesforce etc. are essentially selling their services via APIs. So:

  • APIs can be among a company’s greatest assets.
  • Customers invest heavily in APIs: buying, learning, and writing clients.
  • An API is a public contract, so its developers need to honour it.
  • The cost of stopping or changing the use of an API can be prohibitive.
  • Successful public APIs capture customers.

Every API in the world follows some paradigm or architectural style, such as control language, distributed object, RPC, resource-based architecture (REST) or query language.

Control languages provide an economical and efficient way for application programs to control a remote process, usually residing in hardware (like the firmware of a printer). Hewlett-Packard’s PCL printer language is one example of a control language [1]. Control languages involve sending compact escape sequence codes embedded in the data stream between computer and peripheral hardware. These escape sequence commands are interpreted by the embedded software and the appropriate functionality takes place. Control languages are by their very nature system specific and are not viable for building scalable, general purpose systems.

Remote procedure calls (RPC) allow programs to call procedures located on other machines. When a process on machine A calls a procedure on machine B, the calling process on A is suspended, and execution of the called procedure takes place on B. Information is transported from the caller to the callee in the parameters and comes back in the procedure result; no message passing at all is visible to the programmer. However, it is not easy for clients to invoke remote procedures. They may establish connections to remote systems through low-level protocols like the BSD socket API. Developers who use these mechanisms must convert the data types defined on the remote computing platform to corresponding types on the local platform and vice versa, a process called data marshalling. This can be a daunting task because different platforms use different encoding schemes (e.g., ASCII, EBCDIC, UTF-8, UTF-16, little and big endianness) to represent and store data types. Developers who work at this level must therefore understand how the remote platform encodes data and how it interprets any byte stream received.
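A tiny sketch of why marshalling is delicate: the same 32-bit integer serialises to different byte sequences depending on endianness (shown here with JavaScript's DataView):

```javascript
const buf = new ArrayBuffer(4);
const view = new DataView(buf);

view.setUint32(0, 0x12345678, false);            // big-endian write
const big = Array.from(new Uint8Array(buf));     // [0x12, 0x34, 0x56, 0x78]

view.setUint32(0, 0x12345678, true);             // little-endian write
const little = Array.from(new Uint8Array(buf));  // [0x78, 0x56, 0x34, 0x12]
```

A receiver that assumes the wrong byte order would decode a completely different number, which is why RPC systems must fix a wire representation both sides agree on.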

Remoting technologies like CORBA and DCOM have made it much easier to share and use remote procedures. Hewlett-Packard’s Orblite project is one such effort to build a CORBA-based distributed object communication infrastructure [2]. The Orblite infrastructure can be used for communication between processes running on a computer and processes running on hardware devices like digital scanners and printers. The pieces involved in a distributed call in Orblite are shown in Figure 1. The approach involves generating a common procedure-call signature via an Interface Definition Language (IDL) compiler from a contract defined in the IDL. This process generates a stub on the client and a skeleton on the server. Both communicating parties must agree on the transmittable types beforehand, usually via a Common Data Representation (CDR) format. With this setup, client and server can be implemented on different technology and hardware stacks, and the RPC protocol is free to use any transport mechanism like TCP, HTTP, or TCP over USB.


Fig 1. Pieces involved in a CORBA distributed call.

Though a CORBA-based system is a significant improvement over raw RPC in terms of interoperability, there is still a lot of tight coupling in the form of the IDL and CDR, which affects scalability and the independent evolution of the system. The systems thus developed are also very complex, as you can see in Figure 2, which traces the logical flow of a remote method invocation across all subsystems.


Fig 2. The logical flow of a remote method invocation.

HTTP mitigates many of these issues because it enables clients and servers running on different computing platforms to communicate easily by leveraging open standards. But the challenge is: how can clients use HTTP to execute remote procedures? One approach is to send messages that encapsulate the semantics of procedure invocation, using open data-representation standards like XML and JSON to transmit data between client and server. There are many concrete implementations and standards of HTTP-based RPC, like XML-RPC, JSON-RPC and the Simple Object Access Protocol (SOAP). Among these, SOAP is the most widely known. SOAP provides a layer of metadata describing things such as which fields correspond to which datatypes and which methods are allowed. SOAP uses XML Schema and the Web Services Description Language (WSDL) for this purpose. This metadata allows clients and servers to agree upon the public contract of communication.
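As an illustration of procedure-call semantics layered over open data formats, the sketch below builds a minimal JSON-RPC 2.0 request and response envelope in Python. The method name and parameters are hypothetical, and no network transport is involved:

```python
import json

# A JSON-RPC 2.0 request: the remote procedure and its arguments are
# encoded as plain JSON, so any platform that can parse JSON can serve it.
request = json.dumps({
    "jsonrpc": "2.0",
    "method": "CreateScanRequest",   # hypothetical operation name
    "params": {"resolution_dpi": 300, "color_mode": "grayscale"},
    "id": 1,
})

# The server would decode the envelope, dispatch on "method",
# and reply with a result carrying the same "id".
decoded = json.loads(request)
response = json.dumps({"jsonrpc": "2.0",
                       "result": {"job_id": "42"},
                       "id": decoded["id"]})
```

The key point is that the procedure-invocation semantics live entirely in the message body; HTTP is used only as a transport.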

For example, a SOAP-based system for communication between a process running on a desktop and firmware running in a digital scanner will have a WSDL defining operations like GetScannerCapabilities, CreateScanRequest, CancelScanRequest and GetCurrentScanJobInfo, along with the values and their corresponding datatypes applicable to each operation.

Note, however, that the number of operations, their semantics and their parameters are unique to each system. This poses a great deal of difficulty when integrating disparate systems, as developers have to study the WSDL of every other system to be integrated. Though SOAP allows for Service-Oriented Architecture (where domain-specific services are exposed via HTTP web services), the non-uniformity among web services is rather limiting.

For example, if we consider the SOAP-based services for working with Amazon S3 buckets and their individual objects, we notice an explosion of operations to be considered. Also, though SOAP web services use HTTP as the transport mechanism, they use only the POST method. So we take no advantage of the idempotence and cacheability of the GET method or the state-transfer semantics of the PUT method.

Bucket Webservices

  • ListAllMyBuckets
  • CreateBucket
  • DeleteBucket
  • ListBucket
  • GetBucketAccessControlPolicy
  • SetBucketAccessControlPolicy
  • GetBucketLoggingStatus
  • SetBucketLoggingStatus

Object Webservices

  • PutObjectInline
  • PutObject
  • CopyObject
  • GetObject
  • GetObjectExtended
  • DeleteObject
  • GetObjectAccessControlPolicy
  • SetObjectAccessControlPolicy

The next improvement in web APIs is the resource-oriented style called Representational State Transfer (REST). It is an architectural style defined by a specific set of constraints: REST calls for layered client/server systems that employ stateless servers; liberal use of caching at the client, intermediaries and server; and a uniform interface. REST views a distributed system as a huge collection of resources that are individually managed by components. Resources may be added or removed by (remote) applications, and likewise can be retrieved or modified. [3]

There are four key characteristics of what are known as RESTful architectures:

  1. Resources are identified through a single naming scheme
  2. All services offer the same interface, consisting of at most four operations, as shown in the table below
  3. Messages sent to or from a service are fully self-described
  4. After executing an operation at a service, that component forgets everything about the caller (stateless execution)

In REST based APIs, HTTP is used as a complete application protocol that defines the semantics of service behaviour. It usually involves four HTTP methods with the below semantics:

Operation: Description
PUT: Modify a resource by transferring a new state
GET: Retrieve the state of a resource in some representation
DELETE: Delete a resource
POST: Create a new resource

The usual semantics of a REST API are as follows: when you do a POST on a collection of resources (which has a unique URI), a new resource is created in that collection and a unique URI to the newly created resource is returned. We can perform a GET on the new resource’s URI to retrieve all its information in some representation. Using PUT on this URI, we can update the resource by transferring its new state. We can use DELETE on the URI to remove the resource from the collection permanently. These application semantics hold good for any resource-based service, which helps clients integrate disparate systems and helps us reason easily about communication across subsystems. In the REST style, server-side data is made available through representations in simple formats, usually JSON or XML, but it could be anything.
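These uniform semantics can be sketched without any web framework. The toy dispatcher below is an illustration only (not tied to any real server library): it models a resource collection where POST creates, GET retrieves, PUT transfers new state, and DELETE removes:

```python
import itertools

class ResourceCollection:
    """Toy in-memory model of REST's uniform interface."""

    def __init__(self):
        self._items = {}
        self._ids = itertools.count(1)

    def post(self, representation):
        # POST on the collection creates a resource and returns its URI.
        rid = next(self._ids)
        self._items[rid] = representation
        return f"/api/buckets/{rid}"

    def get(self, rid):
        # GET retrieves the state of the resource; it is safe and idempotent.
        return self._items.get(rid)

    def put(self, rid, representation):
        # PUT replaces the resource's state with the transferred one.
        self._items[rid] = representation

    def delete(self, rid):
        # DELETE removes the resource permanently.
        self._items.pop(rid, None)

buckets = ResourceCollection()
uri = buckets.post({"name": "photos"})
```

Because the four operations mean the same thing for every resource, a client that understands this one interface can talk to any RESTful service.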

Most of the above-mentioned operations on AWS S3 buckets and objects can easily be modelled with only two URI patterns and the four HTTP methods:

GET, POST, PUT, DELETE /api/buckets?query_param1=val…
GET, POST, PUT, DELETE /api/buckets/:object_id?query_param1=val…

A URI can choose to support a limited subset of HTTP methods, and all GET requests are idempotent, which provides for caching in intermediaries and thus greatly improves efficiency.

One of the tenets of RESTful architecture that is less widely used is hypermedia, which provides the “next available actions” in the response of an API. Roy Fielding, in his paradigm-defining thesis on REST, called this HATEOAS (Hypermedia as the Engine of Application State). HATEOAS allows us to develop elegant, self-discoverable API systems. For example, if we develop a HATEOAS API for multi-function printers, a GET on the URI /api/capabilities returns printer capabilities like Print, Scan and Fax, with links to access these capabilities, like /api/capabilities/scan, /api/capabilities/print and /api/capabilities/fax. A GET on /api/capabilities/scan will in turn return links to scanner capabilities like /api/capabilities/scan/flatbed, /api/capabilities/scan/auto_doc_feeder and so on.

A HATEOAS API [5]

curl http://localhost:8080/spring-security-rest/api/customers

{
  "_embedded": {
    "customerList": [
      {
        "customerId": "10A",
        "customerName": "Jane",
        "companyName": "ABC Company",
        "_links": {
          "self": {
            "href": "http://localhost:8080/spring-security-rest/api/customers/10A"
          },
          "allOrders": {
            "href": "http://localhost:8080/spring-security-rest/api/customers/10A/orders"
          }
        }
      },
      {
        "customerId": "20B",
        "customerName": "Bob",
        "companyName": "XYZ Company",
        "_links": {
          "self": {
            "href": "http://localhost:8080/spring-security-rest/api/customers/20B"
          },
          "allOrders": {
            "href": "http://localhost:8080/spring-security-rest/api/customers/20B/orders"
          }
        }
      },
      {
        "customerId": "30C",
        "customerName": "Tim",
        "companyName": "CKV Company",
        "_links": {
          "self": {
            "href": "http://localhost:8080/spring-security-rest/api/customers/30C"
          }
        }
      }
    ]
  },
  "_links": {
    "self": {
      "href": "http://localhost:8080/spring-security-rest/api/customers"
    }
  }
}

One of the downsides of RESTful APIs is that a client may need to call multiple APIs for different resources to piece together the information it needs. When resources are related, forming a graph of relations, it becomes especially difficult in the RESTful style to express the need to retrieve selective information from the web of relations among resources. A new API style that is gaining currency these days, called GraphQL, mitigates this problem [6].

GraphQL is basically RPC with a default procedure providing a query language, a little like SQL. You ask for specific resources and specific fields, and it returns that data in the response. GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. It provides a complete and understandable description of the data in your API and gives clients the power to ask for exactly what they need and nothing more. GraphQL reduces the number of HTTP requests needed to retrieve data from multiple resources.
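For a flavour of what such a query looks like, here is a hypothetical GraphQL query (the `customer` type and its fields are invented for illustration) asking for exactly two customer fields plus the ids of its orders in a single round trip, wrapped into the JSON payload a client would POST:

```python
import json

# A hypothetical GraphQL query: the client names exactly the fields it
# wants, so related resources arrive in one response instead of several.
query = """
{
  customer(id: "10A") {
    customerName
    companyName
    orders { orderId }
  }
}
"""

# GraphQL requests are typically POSTed as JSON to a single endpoint.
payload = json.dumps({"query": query, "variables": {}})
```

Contrast this with the RESTful case above, where the customer and its orders live behind separate URIs and require separate requests.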

Endpoint-based APIs can utilise the full capabilities of the HTTP protocol to cache response data, but GraphQL dispatches queries through POST requests to a single endpoint. So the advantage of out-of-the-box HTTP caching is lost, and API developers need to devise custom caching mechanisms themselves.

There are emerging standards for API documentation, like OpenAPI, pioneered by Swagger. Swagger allows for API design, documentation, development, testing, mocking, governance and monitoring. Alternative documentation tools such as Sphinx, along with its extensions, can also be used.


Another issue in maintaining resource-based APIs like RESTful APIs is versioning. An API URI will at some point need to be replaced or gain new features while the older API continues to support existing clients. There are many solutions to this problem, each with its own merits and demerits. One popular approach is to embed a version number in the API URI, like /api/v1. REST purists frown upon this approach, as they see it breaking a fundamental concept of REST: evolvability. A resource is meant to be more like a permalink, and this permalink (the URL) should never change. A practical downside of the versioned-URI approach is that pointing v1 and v2 to different servers can be difficult.

This server-setup issue for different versions can be resolved by putting the version number in the hostname (or subdomain), as in “https://apiv1.example.com/places”. Other approaches to API versioning put the version information in the body, in query parameters, in a custom request header, or in content negotiation. [7]
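The main placements for version information can be summarised in code. In the sketch below, all URLs and header values are illustrative stand-ins, not a real API:

```python
# Illustrative versioning strategies for the same logical request.
uri_versioned = "https://example.com/api/v1/places"        # version in the path
host_versioned = "https://apiv1.example.com/places"        # version in the subdomain
query_versioned = "https://example.com/places?version=1"   # version as a query parameter

# Version via custom header or content negotiation: the URI stays a
# stable permalink and the version travels in request headers instead.
headers = {
    "X-API-Version": "1",                                  # custom request header
    "Accept": "application/vnd.example.v1+json",           # content negotiation
}
```

The header-based strategies keep URIs stable, at the cost of making the version invisible in logs and browser address bars; the URI-based strategies are more discoverable but couple the permalink to the version.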

Overall, Web APIs are the new digital currency with which digital services are sold. Essentially, the services provided by Facebook, Google, Amazon, Salesforce and others are delivered via APIs. So organisations take great care in defining, documenting and maintaining their Web APIs. Web APIs are public contracts, and hence all the software-engineering due diligence exercised in developing key software systems should be applied to Web APIs as well.

References:

[1] http://www.hp.com/ctg/Manual/bpl13210.pdf
[2] https://www.hpl.hp.com/hpjournal/97feb/feb97a9.pdf
[3] Distributed Systems, Third edition by Maarten van Steen and Andrew S. Tanenbaum. ISBN: 978-90-815406-2-9
[4] https://spring.io/understanding/HATEOAS
[5] https://www.baeldung.com/spring-hateoas-tutorial
[6] https://blog.apisyouwonthate.com/understanding-rpc-rest-andgraphql-2f959aadebe7
[7] Phil Sturgeon. “Build APIs You Won’t Hate”.

Read More
Mobile Technologies Opinion

Security Considerations When Developing Enterprise Digital Solutions – Part II

Today, security is a bigger concern for enterprises across domains than ever before. Last year, malware-based cyberattacks led to sales and revenue losses of almost $300 million for leading global enterprises.

In the previous article, we outlined the top security threats that organizations are facing today. In this article, we will dig a little deeper into the topic of security considerations that enterprises should keep in mind while implementing a digital enterprise solution.

Whether you are planning to build an enterprise digital solution in-house or buy one from a third party, here are the points to remember and guidelines to follow before making a decision:

 

Security Guidelines for Implementing an Enterprise Solution

Identification of Gaps in The Current System

Properly analyze the needs and pain points that the digital solution will help resolve. If your business can really benefit from an enterprise mobility solution, you can figure that out from the information-dissemination flows within the various processes. Unfortunately, this seemingly simple step is often overlooked in the race to stay ahead in the industry.

Scalability of the Digital Solution

Before developing any digital solution, you need to ensure that you have a solid software architecture that is scalable and resilient. Having a robust software architecture exemplifies several important attributes like performance, quality, scalability, maintainability, manageability, and usability.

Making scalability and resilience an integral part of the development will allow the app to sync up with the changing business requirements and the evolving risk landscape. It can also save you from bad customer relations, cost overruns because of the redesign, and revenue loss.

Involving Just the Required Functions

Try not to impose solutions on your frontline staff. Instead, understand how the business will leverage it. Investing in an enterprise mobility solution without completely understanding its usage can leave open gaps in the system and leave the staff clueless about its functions and utility. Therefore, an in-depth understanding of the day-to-day operational activities and how these solutions will enhance routine tasks is crucial while selecting the solution.

Performance of the App

The same effort that you would invest in product conceptualizing and development should be put into testing and quality control for your digital solutions. App testing is paramount if you want your software to perform well. Ask your team to test the application thoroughly so that no loopholes surface during actual usage. Needless to say, you would want to ensure that your app performs well to give an enhanced user experience.

Data Management on the Device

Data management on devices is critical for the security of the app and goes beyond locking down the devices. The device data should be encrypted, protected with a password, time-bombed, and even remote-wiped. Some of the other ways to protect data are by granting/denying permission to store a file and building security for data flow.

Evangelizing the Technology

A huge roadblock for organizations is the inertia towards adopting the technology itself. Most teams set in traditional ways of working are often unwilling to shift to a new style. The only way out of it is to test the solution on a small set of employees and see how the product works. It is all about gaining the trust of a few people who can then become natural influencers of the product.

Going big-bang with any new technology can backfire and is naturally not a good idea.

Cloud Security

The cloud has provided organizations with a completely new frontier for data storage and access, flexibility, and productivity. But with all this comes a world of security concerns. Ensuring that you follow the best of cloud security can avoid data breaches and malware attacks and keep your organization’s integrity and reputation intact.

Protecting the Source Code

Reviewing source code for vulnerabilities and security loopholes is an important part of the software development process.

Protecting the source code of the app is important for two main reasons:

  • First, it protects the business’ intellectual property while encouraging digital innovation.
  • Second, it protects the organization and its clients from attempted attacks on the company’s digital solutions.

Enforce strong security over source code by limiting access, auditing the source code periodically, and reviewing the code independently.

Firewalls and Updated Virus Definitions

A firewall protects your computer and digital solutions from being attacked by hackers, viruses, and malware. Having a firewall allows organizations to create online rules for the users and set up access to certain websites and domains, enabling them to control the way in which employees use the network. Some of the ways a firewall controls online activities include:

  • Packet filtering: analyzing and distributing a small amount of data based on the filter’s standards.
  • Proxy service: saving online information and sending it to the requesting system.
  • Stateful inspection: matching specific details of a data packet with a reliable database.

Use SSL Certificate Links

Certificate pinning means using a known SSL certificate to verify the server. An SSL certificate links an organization’s details with an encrypted key to allow users to connect securely to the application’s servers. Several organizations do not use certificate pinning, which makes them susceptible to cyber attacks.

Using Complex Passwords and Encryption

Needless to say, having weak security measures is as good as having no security standards. Organizations are always advised to use strong passwords and change them frequently to avoid security breaches. Using end-to-end encryption ensures that user data is secured and is at the least risk of being compromised or jeopardized.

Looking out for Impersonating Solutions

Another security aspect of digital enterprise solutions is the existence of impersonating solutions: malware that closely mimics legitimate software to fool users into downloading it.

Once downloaded, this fictitious software can harm the system in several ways, from remotely accessing devices and stealing information to bombarding users with pop-ups and advertisements. In any case, whenever an organization’s security is compromised, it is always the user’s data that is at the risk of being exploited.

App Distribution

Once you’ve created an in-house app, the challenge will be in distributing it. Enterprise apps can either be distributed in-house or can be provided through various operating systems. However, the job is not as easy as it sounds.

While a private app is not intended for distribution through the App Store, there are several ways it can be distributed outside it, including iOS ad hoc distribution, Xcode, or Apple Configurator 2. You can also sign up for Apple’s Enterprise Deployment Program for app distribution.

Final Thoughts

Cybersecurity has to be one of the priorities of organizations when developing enterprise digital solutions. But with this, you need to understand how to test your solution’s security to safeguard your organization. The security considerations mentioned above are not necessarily the only things to keep in mind while developing the solutions, but they are definitely a good place to begin.

After all, if your business isn’t already digital, it soon will be. To prepare for that, you need to offer the most secure digital experiences to your clients, employees, and business partners, irrespective of their location or the devices they use.

Read More
Mobile Technologies Opinion

Security Considerations When Developing Enterprise Digital Solutions – Part I

Enterprise digital solutions can be really beneficial for your organization, but they can be equally detrimental if security isn’t your top concern.

We live in a digital age where information is sacrosanct. While digital transformation is forcing business leaders to either disrupt or be disrupted, it is also leaving the doors open for data breaches and cyber threats. With data privacy taking the center stage in 2019, organizations cannot let vital information slip between the cracks in the coming years. Data theft or loss can cost organizations millions, whether it is direct business losses, audit, and regulatory fines, compliance remediation costs, or most importantly — the loss of client trust, reputation, and brand equity.

With so much on the line, businesses need to devise a robust security infrastructure while developing enterprise digital solutions.

Several organizations around the world are spending hundreds of thousands of dollars on data security.

But is that enough in the digital world?

Cybersecurity is the main focus of several organizations that rely on advanced technologies such as cloud computing, business intelligence, Internet of Things, machine learning, etc. For such organizations, threats of ransomware and distributed denial of service attacks are just the start of a long journey towards digital transformation.

Let us look at some of the top security threats that organizations are facing today.

Enterprise Security

Lack of a Complete Enterprise Security Architecture Framework

It is a known fact that enterprise security architecture is a methodology that addresses security concerns at each step. But more often than not, current enterprise security architectures are not that evolved, if not completely absent.

Uncontrolled Cloud Expansion

The frenzied pace at which businesses are adopting the cloud as a key part of their digital transformation has raised several eyebrows in the last few years. While businesses remain undeterred in adopting the cloud, there is a growing need to create and implement resilient security to support this rapid adoption. Today, protecting data in the cloud environment and supporting the cloud’s native security capabilities are critical.

Network Security

With the drastic increase in cyber-espionage groups trying to compromise vulnerabilities in routers and other devices, network security is also causing sleepless nights for network managers across organizations. The continuous evolution and escalation of threats and vulnerabilities make it a concern that is here to stay for long.

Security, which was once a tiny component of any organization, has gradually evolved into a significant function that will determine any organization’s success. Rising security threats in today’s world have emerged due to new age digital technologies. Security and risk management leaders are currently tasked to safeguard their organizations from cyber attacks with tighter regulations. Security breaches can disrupt the business model of organizations and jeopardize their reputation almost overnight.

The cost of breaches and security compromises can be in millions and result in reputation damage almost immediately. According to the 13th annual Cost of a Data Breach study by Ponemon Institute and IBM, the global average cost of a data breach has climbed 6.4 percent over the previous year to $3.86 million. With these numbers expected to rise in the coming years, organizations around the world cannot afford to ignore a robust IT security system amidst rising cyber attacks and tight regulatory requirements.

Today, there are only two types of businesses left: those that have experienced a cyber attack or security breach, and those that are highly likely to experience one in the near future.

It is really not a question of if, but only a matter of when!

Therefore, having a robust digital solution has become an imperative that just cannot wait.

In the next article, we will talk about the security guidelines that enterprises will need to consider before implementing a digital enterprise solution. Stay tuned.

Read More
Mobile Technologies Opinion

Software 2.0 – A Paradigm Shift

All these years, software has largely been developed in an imperative way, characterised by programmers writing explicit instructions for computers to perform tasks in languages like C++, Python and Java. Computation thus provides a framework for dealing precisely with notions of ‘how to’. This imperative style, concerned with ‘how to’, contrasts with declarative descriptions, concerned with ‘what is’, usually employed in mathematics [1].

With the advent of neural-network-based deep learning techniques, computing is moving towards a new declarative paradigm, sometimes dubbed ‘Software 2.0’, with the earlier imperative paradigm retrospectively called Software 1.0 [2].

In this new paradigm, we do not spell out the steps of an algorithm. Instead, we specify some goal on the intended behaviour of the desired program (like recognising emotions in images, winning a game of Go, or identifying spam in e-mails), write a skeleton of a neural network architecture, throw all the computational resources at our disposal at it, and crank the deep learning machine. Lo and behold! We end up with a model which will provide results for future datasets. Surprisingly, a large portion of real-world problems, like visual recognition, speech recognition, speech synthesis, machine translation, and games like chess and Go, can be solved this way. We collect, curate, massage, clean and label large datasets, train a deep neural network on them to generate a machine learning model, and use this model thereafter. Here, no algorithm is written explicitly; the neural network is shown enough data of enough variety to come up with a predictive model.
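A drastically simplified sketch of this “specify the goal, crank the machine” loop: a single weight fit to toy data by gradient descent on squared error. This is illustrative only, nowhere near a real deep learning stack, but the shape is the same:

```python
# Toy "training": we never write the rule y = 2x; we only provide
# labelled examples and a loss, and let gradient descent find the weight.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]           # data implicitly encoding y = 2x

w = 0.0                         # model: y_hat = w * x
lr = 0.05                       # learning rate

for _ in range(200):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

# w has now been "learned" from data rather than programmed explicitly.
```

The programmer specified *what* (minimise the error on the examples), not *how* (no explicit multiply-by-two rule appears anywhere in the source).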

Declarative programming is not entirely new. Relational databases, with SQL queries to retrieve information out of them, are an exemplar of declarative specification of computing. The relational database is based on a simple but very elegant mathematical formalism called relational algebra. Programmers specify what they want from the database via an SQL query, not how to retrieve the data. They are happily oblivious to the internal organisation of data in the database and to the algorithms used to retrieve it. Database engines work extra hard to organise data optimally and to retrieve it optimally on request via a query.
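Python's stdlib `sqlite3` module makes this concrete: the query below states *what* rows are wanted, and the engine decides *how* to fetch them (the table and data here are invented for illustration):

```python
import sqlite3

# An in-memory database with a small, made-up table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("anna", "eng", 100.0), ("bruno", "sales", 80.0), ("amar", "eng", 90.0)],
)

# Declarative: we describe the result set, not the retrieval algorithm.
rows = conn.execute(
    "SELECT name FROM employees WHERE dept = 'eng' ORDER BY salary DESC"
).fetchall()
```

Whether the engine scans the table or uses an index is entirely its own business; the query text never changes.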

Another area of declarative computing is functional programming based on the mathematical underpinning called ‘Lambda Calculus’, which predates even digital computers. One of the tenets of functional programming is to use higher order functions, which allows us to compose programs declaratively without getting bogged down by the algorithmic nitty gritty.

For example, consider the code snippets below, which extract the names starting with ‘a’ from a list of names, convert them to upper case, and create a new list of the transformed names.

// Java 7
List<String> names = Arrays.asList("anna", "bruno","amar", "fido", "alex");

List<String> upperCased = new ArrayList<>();
for (String name : names) {
    if (name.startsWith("a")) {
        upperCased.add(name.toUpperCase());
    }
}

// Javascript
const names = ['anna', 'bruno', 'amar', 'fido', 'alex'];
let upperCased = [];

for (name of names) {
    if (name.startsWith('a')) {
        upperCased.push(name.toUpperCase());
    }
}

Notice how this code involves iterating over the list, checking whether each name starts with ‘a’, converting each such name to upper case, and adding it to the new list. If the operation is more involved, the code becomes tedious and difficult to reason about: we must follow the iteration along to understand exactly what happens inside the loop body. So the code is neither self-evident nor intent-revealing.

On the other hand, consider the same operation performed using higher-order functions like map and filter.

// Java 8
List<String> upperCased = names.stream()
        .filter(name->name.startsWith("a"))
        .map(String::toUpperCase)
        .collect(Collectors.toList());

// Javascript
upperCased = names.filter(name => name.startsWith('a')).map(name => name.toUpperCase());

Here, if we know the semantics of the operations map and filter, we pretty much know what is being done! We need only look up what operation is performed inside map and filter. This is quite intent-revealing, and the code is concise too. One of the great advantages of such higher-order, declarative programs is that the compiler can also infer the intent easily and apply optimisations and transformations like parallelisation.

Renowned computer scientist Erik Meijer regards this successful conversion of training data into models, using the machine learning techniques of deep learning, as the future direction of computing [3].

These machine learning models are essentially pure functions devoid of any side effects and are based on the solid mathematical ideas of back propagation and stochastic gradient descent. As we have seen previously, a software paradigm based on a solid mathematical underpinning is destined to succeed. The new paradigm is like turning test-driven development on its head: in test-driven development we devise test cases and then write code to satisfy their expectations; in deep-learning-based development, we give the machine the test cases (training data) and it produces software, based on neural networks, that satisfies them.

There is, however, one fundamental difference between Software 1.0 code and a machine learning model. Code is deterministic and discrete. The output of a model is probabilistic, uncertain and continuous. A spam filter does not tell whether a mail is spam in discrete boolean terms; it tells its confidence in the possibility of the mail being spam. However, by providing a large amount of carefully curated training data, we can improve the accuracy of the spam filter to any desired level.
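The difference can be sketched in a few lines: a hypothetical Software 1.0 rule answers in booleans, while a model-style scorer returns a confidence that the application must threshold. The scoring function here is invented, standing in for a trained model:

```python
def rule_is_spam(mail: str) -> bool:
    # Software 1.0: a hand-written, discrete, deterministic rule.
    return "free money" in mail.lower()

def model_spam_score(mail: str) -> float:
    # Stand-in for a trained model: returns a confidence in [0, 1],
    # not a yes/no answer.
    suspicious = ["free", "money", "winner", "urgent"]
    hits = sum(word in mail.lower() for word in suspicious)
    return hits / len(suspicious)

mail = "URGENT: claim your FREE money now, winner!"
verdict = rule_is_spam(mail)    # True or False, nothing in between
score = model_spam_score(mail)  # a confidence, here 1.0
is_spam = score >= 0.8          # the threshold is a policy decision,
                                # separate from the model itself
```

Choosing the threshold is where the application re-introduces discreteness on top of the model's continuous output, trading false positives against false negatives.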

As Dr. Meijer puts it succinctly, the future of programming seems to be combining neural nets with probabilistic programming. Companies like Tesla have already made great strides in this direction.

References:

  1. Hal Abelson’s, Jerry Sussman’s and Julie Sussman’s Structure and Interpretation of Computer Programs (MIT Press, 1984; ISBN 0-262-01077-1)
  2. Software 2.0
  3. Alchemy For the Modern Computer Scientist – Erik Meijer
Read More
Mobile Technologies Tech Talk

Why Python for Machine Learning?

Python is a deceptively simple but very elegant programming language. It is one of the go-to languages in the domains of numeric computing, scientific computing, data science and machine learning. The data-wrangling library Pandas, the numeric computing library NumPy and the scientific computing library SciPy are all Python libraries, as are machine learning libraries like scikit-learn, TensorFlow and Keras. Python data visualization libraries like Matplotlib, Seaborn, Bokeh and Plotly are also well known. All these libraries are open source.

Read More
Mobile Technologies Tech Talk

MOOC: ushering in an era of autodidacticism

“Only the autodidacts are free.”
― Nassim Nicholas Taleb, Author of “The Black Swan”

“nahi jnAnEna sadRushaM pavitraM iha vidyatE”
— Bhagavadgeetha

Software development is one of the professions where its practitioners need to constantly keep themselves up to date with the rapidly changing field. Robert C Martin (Uncle Bob) in his much acclaimed book “The Clean Coder” has this to say:

Read More
Mobile Technologies Tech Talk

What is new in Java 9?

The next major version of the Java programming language is scheduled for general availability on 21 September 2017: Java 9 (Java SE 9 Platform, JDK 9). Though this new version does not have features as paradigm-changing as those of Java 5 or Java 8, it brings many interesting additions to the language, which has been the venerable workhorse of software development.

Read More
Mobile Technologies Tech Talk

#TechTalk: Unit testing – better to be safe than sorry

Tech folks will agree that unit testing and test-driven development are good concepts but are seldom practiced. Their usefulness and importance are much eulogised by doyens like Bob Martin, Kent Beck et al.

Read More