Airo Global Software

Think Beyond Future!

Every software developer with even a little experience understands the value of keeping it simple and stupid (KISS). Once you've learned how to use classes and functions, you don't want to repeat yourself, so you keep things DRY. The goal of all of these principles is to reduce mental complexity in order to make software easier to maintain.

  • Don’t repeat yourself (DRY)

The DRY principle is at the heart of software development. We organize our code into packages and modules. We extract functions. We try to make the code reusable so that we can hopefully maintain it more easily.
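As a minimal Python sketch (the function names and the VAT rate are made up for illustration), the duplicated pricing rule below is extracted into a single place, so a typical change only needs one adjustment:

# Before: the same gross-price rule is written out twice.
def invoice_total(prices):
    return sum(price * 1.19 for price in prices)  # VAT duplicated here ...

def quote_total(prices):
    return sum(price * 1.19 for price in prices)  # ... and here

# After: the rule lives in one place; changing the VAT rate is a one-line edit.
VAT_RATE = 1.19

def gross_total(prices):
    return sum(price * VAT_RATE for price in prices)

print(gross_total([10.0, 20.0]))  # prints the gross total for both items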

Benefits: Reduced complexity. The more code you have, the more maintenance you'll have to do. DRY usually results in less code. This means that for typical changes, you only need to make one adjustment.

Risk: When you do it too often, the code tends to become more complex.

Tooling support: There are programs that can detect duplicated code. For Python, pylint's similarity checker can do it:

pylint --disable=all --enable=similarities src
  • You Ain’t Gonna Need It (YAGNI)

YAGNI is the realization that speculative features and abstractions you don't need yet actually harm maintainability. I'm looking at you, Java developers!

Benefit: Reduced complexity. The removal of abstractions clarifies how the code works.

Risk: You will have difficulty extending your software if you use YAGNI too much and thus make too few abstractions. Furthermore, junior developers may tamper with the code in an unfavourable way.

Tooling support: None

  • Keep it Simple and Stupid (KISS)

KISS can be applied to a variety of situations. Although some solutions are smart and solve the problem at hand, the dumber solution may be preferable because it has less of a chance of introducing problems. This may occasionally be less DRY.
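A tiny, hypothetical Python illustration: both functions count words, but the "stupid" loop is easier to read and debug than the clever one-liner, even though the clever version is more compact:

from functools import reduce

# Clever: works, but takes a moment to decode.
def word_counts_clever(words):
    return reduce(lambda acc, word: {**acc, word: acc.get(word, 0) + 1}, words, {})

# Simple and stupid: does the same thing and is obviously correct.
def word_counts_simple(words):
    counts = {}
    for word in words:
        counts[word] = counts.get(word, 0) + 1
    return counts

print(word_counts_simple(["a", "b", "a"]))  # {'a': 2, 'b': 1}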

  • Principle of Least Surprise

Design your systems so that the location of feature implementation, as well as the behaviour and side-effects of a component, are as unsurprising as possible. Keep your coworkers informed.

Benefit: Reduced complexity. You ensure that the system's mental model corresponds to what people naturally assume.

Risk: You may need to break DRY in order to complete this task.

Tooling support: None. However, there are some indications that this was not followed:

  • You're explaining the same quirks of your system to new colleagues over and over.
  • You have to look up the same topic several times.
  • You feel compelled to document a topic that is not inherently difficult.

  • Separation of Concerns (SoC)

Every package, module, class, or function should be concerned with only one issue. When you try to do too many things, you end up doing none of them well. In practice, it is most visible in the separation of a data storage layer, a presentation layer, and a layer containing the business logic. Other types of concerns could include input validation, data synchronization, authentication, and so on.
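Here is a compact Python sketch of that layering (all names are invented for illustration): validation, business logic, and storage each live in their own function, so a change to one concern does not touch the others:

# Input-validation concern
def validate_order(order):
    if order.get("quantity", 0) <= 0:
        raise ValueError("quantity must be positive")

# Business-logic concern
def order_total(order):
    return order["quantity"] * order["unit_price"]

# Storage concern (a stand-in for a real database layer)
def save_order(order, database):
    database.append(order)

def place_order(order, database):
    validate_order(order)
    save_order(order, database)
    return order_total(order)

db = []
print(place_order({"quantity": 2, "unit_price": 9.5}, db))  # 19.0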

Benefit: Reduced complexity.

  • It's usually easier to see where changes need to be made.
  • There should be fewer unfavourable side effects to consider.
  • People can work in parallel without encountering a slew of merge conflicts.

Risk: If you go overboard on SoC, you will almost certainly violate KISS or YAGNI.

Tooling support: Coupling can be estimated by counting how many classes/functions from other packages are used. A large number of externally imported functions may indicate SoC violations. A large number of merge conflicts may also indicate a problem.

  • Fail early, fail loud

As developers, we must deal with a wide range of errors. And it's unclear how to deal with them, especially for beginners.

Failing early is a pattern that has helped me a lot in the past. That is, the error should be recognized very close to the location where it occurs. User input, in particular, should be validated directly in the input layer. Network interactions are another common scenario in which error cases must be handled.

The other pattern is to fail loudly, which means to throw an exception and log a message. Don't simply return None or NULL; raise an exception instead. Depending on the type of exception, you may also want to notify the user.
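A small Python sketch of both patterns (the function and field names are illustrative only): the user input is validated right at the input layer, and failure raises an exception and logs a message instead of silently returning None:

import logging

logger = logging.getLogger(__name__)

def parse_age(raw_input):
    """Validate user input directly in the input layer."""
    try:
        age = int(raw_input)
    except ValueError:
        logger.error("Invalid age input: %r", raw_input)              # fail loud: log ...
        raise ValueError(f"age must be a number, got {raw_input!r}")  # ... and raise
    if not 0 <= age <= 130:
        logger.error("Age out of range: %d", age)
        raise ValueError(f"age out of range: {age}")
    return age  # never silently return None

print(parse_age("42"))  # 42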

Benefit: Easier to maintain because it is clear where functionality belongs and how the system should be designed. Errors occur earlier, making debugging easier.

Risk: None.

Tooling support: None

  • Defensive Programming

The term "defensive programming" is derived from the term "defensive driving." Defensive driving is defined as "driving to save lives, time, and money regardless of the circumstances around you or the actions of others." Defensive programming is the concept of remaining robust and correct in the face of changing environmental conditions and the actions of others. This can mean being resistant to incorrect input, such as when using an IBAN field in a database to ensure that the content stored there contains an IBAN. It may also imply making assertions explicit and raising exceptions if those assertions are violated. It may imply making API calls idempotent. It may imply having a high level of test coverage in order to be defensive against future breaking changes.

Three fundamental rules of defensive programming

  • Until proven otherwise, all data is relevant.
  • Unless proven otherwise, all data is tainted.
  • Until proven otherwise, all code is insecure.

"Shit in shit out" is an alternative to defensive programming.

Benefit: Higher robustness

Risk: Increased maintenance as a result of a more complex/lengthy code base

Tooling support: Measure your test coverage to see how much of your code is exercised by unit tests. Try mutation testing if you want to go further. For infrastructure, there is chaos engineering, and load testing helps as well.

SOLID

The SOLID principles provide guidance in the areas of coupling and cohesion. They were designed with object-oriented programming (OOP) in mind, but they can also be applied to abstraction levels other than classes, such as services or functions. Later on, I'll simply refer to those as "components."

Two components can be linked in a variety of ways. For example, one service may require knowledge of how another service operates internally in order to perform its functions. The more component A depends on component B, the more A is coupled to B. Please keep in mind that this is an asymmetrical relationship: A depending on B does not mean that B depends on A. When we simply say that two components are coupled, we are not specifying a direction.

One module's high cohesion indicates that its internal components are tightly linked. They are all about the same thing.

We strive for loose coupling and high cohesion between components.

The principle of single-responsibility

"A class should never change for more than one reason."

The roles of software entities such as services, packages, modules, classes, and functions should be clearly defined. They should typically operate at a single abstraction level and not do too much.

One tool for achieving separation of concerns is single responsibility.

Tooling support:

I'm not aware of any automated tools for detecting violations of the single-responsibility principle. You can, however, try to describe the functionality of a component without using the words "and" or "or." If this does not work, you may be violating the principle.
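For example, a class that you would describe as one that "formats and emails a report" is a candidate for splitting. A minimal Python sketch with invented names:

# Before: one class that "formats *and* sends" has two reasons to change.
# After: each class has exactly one responsibility.

class ReportFormatter:
    def format(self, rows):
        return "\n".join(f"{row['name']}: {row['total']}" for row in rows)

class ReportSender:
    def send(self, body, recipient):
        print(f"Sending to {recipient}:\n{body}")  # stand-in for real mailing code

rows = [{"name": "Q1", "total": 100}, {"name": "Q2", "total": 120}]
ReportSender().send(ReportFormatter().format(rows), "boss@example.com")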

The open-closed principle

"Software entities... should be open to extension but not to modification."

If you modify a component on which others rely, you risk breaking their code.
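A hypothetical Python sketch: new export formats are added as new classes, while the function that uses them is never modified:

from abc import ABC, abstractmethod
import json

class Exporter(ABC):
    @abstractmethod
    def export(self, data):
        ...

class JsonExporter(Exporter):
    def export(self, data):
        return json.dumps(data)

class CsvExporter(Exporter):
    # New behaviour is added as a new class, not by editing existing code.
    def export(self, data):
        return ",".join(f"{key}={value}" for key, value in data.items())

def run_export(exporter, data):
    # Closed for modification: this never changes when a new exporter appears.
    return exporter.export(data)

print(run_export(CsvExporter(), {"a": 1, "b": 2}))  # a=1,b=2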

The Liskov substitution principle

"Functions that use pointers or references to base classes must be able to use derived class objects without being aware of it."

Benefit: This is a fundamental assumption in OOP. Simply follow it.

Risk: None
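In Python terms (with illustrative classes), code written against the base class keeps working when it is handed a subclass, because the subclass honours the base contract:

class Storage:
    def save(self, key, value):
        raise NotImplementedError

class MemoryStorage(Storage):
    def __init__(self):
        self.items = {}

    def save(self, key, value):
        # Same signature and behaviour contract as the base class,
        # with no extra preconditions.
        self.items[key] = value

def archive(storage):
    # Written against Storage; works with any well-behaved subclass.
    storage.save("report", "done")

memory = MemoryStorage()
archive(memory)
print(memory.items)  # {'report': 'done'}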

The principle of interface segregation

"A number of client-specific interfaces are preferable to a single general-purpose interface."

Benefit: It's easier to extend software and reuse interfaces if you have the option to pick and choose. However, if the software is entirely in-house, I would rather create larger interfaces and split as needed.

Risk: Violation of KISS.
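A small sketch using Python protocols (the interface and class names are made up): clients that only read data depend on a narrow Reader interface and never see the writing methods:

from typing import Protocol

class Reader(Protocol):
    def read(self, key: str) -> str: ...

class Writer(Protocol):
    def write(self, key: str, value: str) -> None: ...

class KeyValueStore:
    # Implements both small interfaces structurally.
    def __init__(self):
        self._data = {}

    def read(self, key):
        return self._data[key]

    def write(self, key, value):
        self._data[key] = value

def show(reader: Reader, key: str) -> None:
    print(reader.read(key))  # this client only needs the Reader interface

store = KeyValueStore()
store.write("greeting", "hello")
show(store, "greeting")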

The principle of dependency inversion

"Rely on abstractions rather than concretions."

In some cases, you may want to operate on a broader class of inputs than the one you're currently dealing with. WSGI, JDBC, and basically any plugin system come to mind as examples. You want to define an interface on which you rely. The components must then implement this interface.

Assume you have a program that requires access to a relational database. You could now implement the queries for every type of relational database. Alternatively, you can specify that the function receives a database connector that supports the JDBC interface.
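A Python sketch of that idea (the connector interface and classes here are invented, not a real JDBC binding): the function depends on an abstract connector, and any concrete database class that implements it can be plugged in:

from abc import ABC, abstractmethod

class DatabaseConnector(ABC):
    """The abstraction the business code relies on."""
    @abstractmethod
    def query(self, sql):
        ...

class InMemoryConnector(DatabaseConnector):
    def query(self, sql):
        # Stand-in for a real database driver.
        return [("alice",), ("bob",)]

def list_users(db: DatabaseConnector):
    # Depends only on the interface, not on any concrete database.
    return [row[0] for row in db.query("SELECT name FROM users")]

print(list_users(InMemoryConnector()))  # ['alice', 'bob']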

Benefit: In the long run, this makes the software much easier to maintain because it is clear where functionality is located. It also aids in KISS.

Risk: Overdoing it may result in a violation of KISS. A good rule of thumb is that an interface should be implemented by at least two classes before it is created.

  • Locality Principle

Things that belong together should not be separated. If two sections of code are frequently edited together, try to keep them as close together as possible. At the very least, they should be in the same package, hopefully in the same directory, and possibly in the same file — and if you're lucky, they should be in the same class or directly below each other within the file.

Benefit: Hopefully, this means fewer merge conflicts. When you try to find that other file, you do not need to switch context. When you refactor that piece, you may recall everything that belongs to it.

Risk: Violating loose coupling or concern separation.

Tooling support: So far, none, but I'm considering making one for Python. Essentially, I would examine the git commits.

If you have any doubt about the above topics, don't hesitate to contact us. Airo Global Software will be your digital partner.

E-mail id: [email protected]

Author - Johnson Augustine
Chief Technical Director and Programmer
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

TypeScript ended 2020 with a fantastic 4.1 release. There, the long-awaited Template Literal Types and key remapping in mapped types were introduced. These opened the door to a lot of new possibilities and patterns.

However, the 4.1 release was only laying the groundwork for Template Literal Types. This feature has matured over the course of the 2021 releases. For example, we can now use Template Literal Types as union discriminants after the 4.5 release.

There have been four Typescript releases in 2021. They are jam-packed with fantastic features. The core and developer experiences have been vastly improved. In this blog, I will summarise my top 2021 picks. Those are the ones that have the most influence on my daily Typescript.

  • Tuples' Trailing Elements

Typescript has long supported the Tuple basic type. It enables us to express a predetermined number of array-type elements.

let arrayOptions: [string, boolean, boolean];

arrayOptions = ['config', true, true]; // works

arrayOptions = [true, 'config', true];
//             ^^^^^  ^^^^^^^^^
// Does not work: incompatible types

function printConfig(data: string) {
  console.log(data);
}

printConfig(arrayOptions[0]);

As part of the tuple, we can define optional elements:

// last 2 elements are optional
let arrayOptions: [string, boolean?, boolean?];
//  A required element cannot follow an optional element
let arrayOptions: [string, boolean?, boolean];

We are implying that our array can have multiple lengths by using the optional modifier. We can even go one step further and define a dynamic length array of the following type:

//  the remaining elements are optional booleans
let arrayOptions: [string, ...boolean[]];
// the below will be all valid
arrayOptions = ['foo', true];
arrayOptions = ['foo', true, true];
arrayOptions = ['foo', true, true, false];

Tuples become more powerful in this new TypeScript. We could previously use the spread operator, but we couldn't specify the last element types.

However, prior to the release of 4.2, the following would be incorrect:

//  An optional element cannot follow a rest element.
let arrayOptions: [string, ...boolean[], number?];

Prior to 4.2, the rest operator had to be the tuple's final element. That is no longer required in the 4.2 release. We can add as many trailing elements as we want without being limited by that constraint. We cannot, however, add an optional element after a spread operator.

//  Prior to 4.2, Error: rest element must be last in a tuple type
let arrayOptions: [string, ...boolean[], number];
// works from 4.2
let arrayOptions: [string, ...boolean[], number];
//  Error: An optional element cannot follow a rest element
let arrayOptions: [string, ...boolean[], number?];

Let’s see more details:

let arrayOptions: [string, ...boolean[], number];
arrayOptions = ['config', 12]; // works
  • Errors on Always-Truthy Promise Checks

TypeScript will throw an error when asserting against a promise starting with the 4.3 release, as part of the strictNullChecks configuration.

function fooMethod(promise: Promise<boolean>) {
    if (promise) {
    // ^^^^^^^^^
    // Error: This condition will always return true since this 'Promise<boolean>'
    // appears to always be defined.
    // Did you forget to use 'await'?
        return 'foo';
    }
    return 'bar';
}

Because the if condition will always be true, the compiler requests that we modify the if statement.

In the config compiler options, there is no additional flag to configure this.

  • Const variables preserve Type Guard references

When evaluating an if statement, TypeScript will now perform some additional work. If the variable is const or readonly, it will keep its Type Guard, if it has one. Go through the code below:

function trim(text: string | null | undefined) {
 const isString = typeof text === "string";
 // prior to 4.4, this const doesn't work as a Type Guard
 if (isString) {
   return text.trim();
   //     ^^^^
   //  Prior to 4.4: Object is possibly 'null' or 'undefined'
   //  Works on 4.4 and onwards
 }
 return text;
}

If the isString variable were declared with let, the preceding code would fail. The Type Guard would be ignored, and the code would fail as shown below:

// isString is instead declared as let
let isString = typeof text === "string";
//  this statement won't work as a Type Guard
if (isString) {
...
}
  • Combining several variables

Type Guard aliases are now smarter and can understand multiple variable combinations.

function concatUppercase(a: string | undefined, b: string | undefined) {
 const bothNonEmpty = a && b;
 //  the Type Guard will work for a and b
 if (bothNonEmpty) {
   //  a and b are of type string
   return `${a.toUpperCase()} ${b.toUpperCase()}`
 }
 return undefined;
}

Both Type Guards are stored in the bothNonEmpty const variable. Within the if statement, a and b are both of type string. It also works transitively: when combining variables with Type Guards, those guards will still be propagated. That means you'll be able to combine as many Type Guards as you want without losing any typing information.


In the preceding example, we can see that the bothNonEmpty alias retains the Type Guard information for both variables.

In conclusion, the Control Flow Analysis has been greatly enhanced. The best part is that it will work right away in Typescript 4.4.

  • Specific Optional Property Types

When working with Typescript, there is a recurring debate: should a property be made optional or undefined? It all comes down to personal preference.

So, what's the issue? The Typescript compiler treats both equally. As a result, there is some inconsistency in the code and some friction.

Let’s see an example:

interface User {
 nickName: string;
 email?: string;
}
// is considered equivalent to
interface User {
 nickName: string;
 email: string | undefined;
}

To put a stop to this inconsistency, Typescript now has a flag called --exactOptionalPropertyTypes. When enabled, it will generate an error if you attempt to treat an optional value as nullable and vice versa. Consider the following code with --exactOptionalPropertyTypes set to true:

interface User {
 nickName: string;
 email?: string;
}
//  Error: Type 'undefined' is not assignable to type 'string'
const user1: User = {
 nickName: 'dioxmio',
 email: undefined
}
//  Works fine, email is optional
const user2: User = {
 nickName: 'max',
}

The code above would be fine if the --exactOptionalPropertyTypes option was not enabled.

To avoid any unintended consequences, it is disabled by default. It is up to us to decide whether or not it is a feature worth having.

  • Symbol Index Signatures and Template Literal Strings

In index signatures, Typescript 4.4 now supports symbol, union, and template literal strings. Unions are allowed as long as they are made up of a string, number, or symbol.

An example using the symbol:

interface Log {
 //  symbols are now supported as a key type
 [x: symbol]: string;
}
const warn = Symbol('warn');
const error = Symbol('error');
const debug = Symbol('debug');
const log: Log = {};
log[warn] = 'A warning has occurred in line X';

Code using a template literal string:

interface Transaction {
 // template literal string
 [x: `amex-${string}`]: string;
}
const log: Transaction = {};
log['amex-123456'] = '$120';

We can reduce a lot of boilerplate by being able to use unions. We can express our interfaces and types more clearly.

// unions, template literal string and symbols are now supported
interface Foo {
 [x: string | number | symbol | `${string}-id`]: string;
}
// the code above is Equivalent to
interface Foo {
 [x: string]: string;
 [x: number]: string;
 [x: symbol]: string;
 [x: `${string}-id`]: string;
}

The index signatures aren't perfect yet. They have constraints. They continue to lack support for Generic Types and Template Literal Types:

type dice = 1 | 2 | 3 | 4 | 5 | 6;
interface RecordItem<K, V> {
 //  generics are not supported
 [x: K]: V;
 //  template literal types are not supported
 [x: `${dice}x${dice}`]: string;
}

Nonetheless, this is a fantastic feature addition that will allow us to create more powerful interfaces with fewer lines of code.

  • The Awaited Type

Prior to 4.5, we had to use the infer functionality, as shown below, to determine the resolved type of a Promise:

type Unwrap<T> = T extends PromiseLike<infer U> ? U : T;
const resultPromise = Promise.resolve(true);
//  resultUnwrapType is boolean
type resultUnwrapType = Unwrap<typeof resultPromise>;

A new type, Awaited, is included in the 4.5 release. We no longer need a custom helper type like the Unwrap one described above.

Syntax:

type Result = Awaited<Type>;

Use case examples:

//  type is string
type basic = Awaited<Promise<string>>;
//  type is string
type recursive = Awaited<Promise<Promise<string>>>;
//  type is boolean
type nonThenObj = Awaited<boolean>;
// type is string | Date
type unions = Awaited<Date | Promise<Promise<string>>>;
type FakePromise = { then: () => string };
//  type is never
type fake = Awaited<FakePromise>;
  • Type Modifiers on Import Names

This instructs the TypeScript compiler that an import only contains TypeScript types. What difference does it make? When converting the code to JavaScript, the compiler can safely strip that import.

//  importing the FC type from React
import type { FC } from 'react';
import { useEffect } from 'react';

As you can see above, the issue is that if you want to be explicit about which imports are types, you sometimes have to split your import statements in two. You can continue to do the following:

import { FC, useEffect } from 'react';

However, you then lose the information about which imports are only types. You can mix them together starting with version 4.5.

import { type FC, useEffect } from 'react';

This clarifies the code without adding any extra boilerplate.

Conclusion

Typescript has grown in popularity over the years. We can anticipate Typescript becoming the default language for JavaScript-based projects in the near future. In 2021, Typescript has greatly improved. Its core has become smarter, allowing us to rely more on inference. As a result, it is less intrusive and easier to transition from JavaScript code.

The year 2022 is also looking exciting. There are some cool elements on the horizon. If you have any doubt about the best TypeScript features, don't hesitate to contact us. Airo Global Software will be your digital partner.

E-mail id: [email protected]


Author - Johnson Augustine
Chief Technical Director and Programmer
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Before we begin, a basic understanding of JavaScript and object-oriented programming is required. I'll try to be as thorough as possible, but I won't go through the basics of JavaScript again.

  • User data is retrieved via an API, then formatted and displayed.

To get started, clone this GitHub repository. It includes a simple React app that shows a list of ten employees. Each employee has a first and last name, as well as an email address, a photo, and the date of registration. The employee data is fetched from a JSON file, public/users.json, located in the public folder, when the React application is launched.

This React application's structure is quite standard. The following are the three primary folders:

  • Both conventional components, such as the Date and Email components, and layout components, such as the Header and the Main wrapper, are found in the components folder.
  • The Home component is located in the pages folder. We'll have a lot of page components in a bigger project.
  • Both the JSON mock file and the randomuser API are accessed through the services folder.

The pages/Home component, as well as the React Query setup, are used in the App.js file.

The data synchronization aspect of the project is done with react-query. This library is based on the react-apollo library and enables declarative creation of HTTP requests. Although I do not believe the documentation is always clear, it is really useful and readable.

You must wrap your entire React application in the QueryClientProvider component to make it work:

function App() {
    return (
        <QueryClientProvider client={queryClient}>
            <HomePage />
        </QueryClientProvider>
    )
}

Then, as in the case of the pages/Home component, you can use the useQuery hook:

const Page = () => {
    const { isLoading, error, data } = useQuery(
        'users',
        () => get(),
        {
            refetchOnWindowFocus: false
        }
    )

    if (isLoading) return <div>Loading...</div>
    if (error) return <div>An error occurs...</div>

    return (
        <Body>
            <Main>
                <Header>
                    <PageTitle text='Students' />
                </Header>
                {data.map(user => <UserCard key={user.id} user={user} />)}
            </Main>
        </Body>
    )
}

The data is presented in the UserCard component once the promise is resolved. This component is supported by a number of additional components. For example, the Image component shows the user's photo, the Name component shows the user's name, and the Email component shows the user's email address. Prop drilling is used to transfer data from top to bottom components.

const Component = ({ user }) => (
    <Wrapper>
        <UserImage
            firstName={user.first_name}
            lastName={user.last_name}
            picture={user.picture}
        />
        <UserName
            firstName={user.first_name}
            lastName={user.last_name}
        />
        <UserJoinedDate date={user.registered_date} />
        <UserEmail email={user.email} />
    </Wrapper>
)

For the time being, our programme relies on the JSON file in the public folder, which works perfectly. But now it's time to make things a little more complicated: instead of using the JSON file, we'll use the Random User Generator API. Refresh the application after changing getMockData to getApiData in the get method of the services/Api/index.js file.

async function getMockData() {
    return await fetch('http://localhost:3000/users.json')
        .then(res => res.json())
        .then(({ data }) => data)
}

async function getApiData() {
    return await fetch('https://randomuser.me/api/?results=10')
        .then(res => res.json())
        .then(({ results }) => results)
}

async function get() {
    return await getMockData() // change this to getApiData
}

export default get

The application is now broken, and you'll need to modify all of the props to make it function again. It could be acceptable in a one-page application, and you could do it, but picture having to do it in a ten- or twenty-page application: it won't be pleasant at all. You'll learn nothing and possibly introduce some bugs. The constructor pattern comes in very handy in this situation.

  • Using the Constructor Pattern, create a model for our user data.

The constructor pattern is frequently the first design pattern I teach new developers. It's a wonderful introduction to design patterns because it's directly applicable and doesn't rely on abstraction. It's simple to grasp, can be done on the front end, and is also simple to utilize.

When I started learning Java and later PHP, I first heard about it. You may have already learned this notion if you are familiar with these languages. Do you know what the names POJO and POPO mean? POJO and POPO stand for Plain Old Java Object and Plain Old PHP Object, respectively. We also refer to them as Entities. We can use them to encapsulate and store data most of the time.

Here's an example of a POPO that uses an interface to define the object's blueprint:

interface UserInterface {
  public function getName();
  public function setName($name);
}
class User implements UserInterface {
  public $firstName;
  public $lastName;
  public function getName()
  {
      // ...
  }
  public function setName($value) {
      // ...
  }
}

Things aren't always the same in JavaScript as they are in other programming languages. This is due to the fact that JavaScript is a prototypal object-oriented language rather than a class-based language, and it is also not a completely object-oriented language. Enumerations and interfaces, for example, do not exist in JavaScript. We can imitate enumerations with Object.freeze, but this is not the same as using the enum keyword.

Returning to the constructor pattern, there are two ways to implement it: using a function or a class/prototype. Because the class keyword constructs a prototype under the hood, the terms class and prototype are interchangeable here.

Here's an example with a class:

class User {
    constructor(firstName, lastName, age) {
        this._firstName = firstName
        this._lastName = lastName
        this._age = age
    }

    get firstName() {
        return this._firstName
    }

    get lastName() {
        return this._lastName
    }

    get age() {
        return this._age
    }

    displayUserInfo() {
        console.log(`Here are the information I have on this user: ${this._firstName}, ${this._lastName}, ${this._age}`)
    }
}

const MyFirstUser = new User('Thomas', 'Dimnet', 33)
const MySecondUser = new User('Alexandra', 'Corbelli', 30)

MyFirstUser.displayUserInfo()
MySecondUser.displayUserInfo()

And below it with a function:

function User(firstName, lastName, age) {
    this._firstName = firstName
    this._lastName = lastName
    this._age = age

    this.firstName = function() {
        return this._firstName
    }

    this.lastName = function() {
        return this._lastName
    }

    this.age = function() {
        return this._age
    }

    this.displayUserInfo = function() {
        console.log(`Here are the information I have on this user: ${this._firstName}, ${this._lastName}, ${this._age}`)
    }
}

const MyFirstUser = new User('Thomas', 'Dimnet', 33)
const MySecondUser = new User('Alexandra', 'Corbelli', 30)

MyFirstUser.displayUserInfo()
MySecondUser.displayUserInfo()

I like to work with the class keyword most of the time since I believe it is more legible and clear. At a glance, we can already see what the class's getters and setters are. Feel free to use either version, but I'll be using the class version for the rest of the blog.

One of my favourite aspects of the constructor pattern is its ability to store both raw and parsed data. Assume you receive a timestamp date from an API and need to display it in two formats: "YYYY-MM-DD" and "DD-MM-YYYY". Here's an example of how you can use the constructor pattern.

import moment from "moment"

class Movie {
    constructor(date) {
        this._date = date
    }

    get date() {
        return this._date
    }

    get dateV1() {
        return moment(this._date).format("YYYY-MM-DD")
    }

    get dateV2() {
        return moment(this._date).format("DD-MM-YYYY")
    }
}

By the way, the constructor pattern isn't just for formatting objects: you can use it for any type of object creation. Many jQuery effects, for example, employ the constructor pattern.

Before we begin implementing the solution, keep in mind the main disadvantage of this pattern: it can be memory-intensive. Although our computers and phones now have a lot of memory, it's always important to remember that our software and applications need to be optimized. I recommend that you only use this pattern where it is required.

Change from mocked data to API data without a hitch.

You can now switch to the with-constructor-pattern branch from the current one. There are two constructor-pattern models in this branch: src/models/MockedUser.js and src/models/ApiUser.js. The first uses the hard-coded JSON data, while the second uses data from the Random User Generator API. The data displayed is currently coming from the JSON file.

The MockedUser object looks like this:

import moment from "moment"

class User {
    constructor(data) {
        this._id = data.id
        this._firstName = data.first_name
        this._lastName = data.last_name
        this._email = data.email
        this._picture = data.picture
        this._registeredDate = data.registered_date
    }

    get id() {
        return this._id
    }

    get fullName() {
        return `${this._firstName} ${this._lastName}`
    }

    get email() {
        return this._email
    }

    get picture() {
        return this._picture
    }

    get registeredDate() {
        return moment(this._registeredDate).format('MM/DD/YY')
    }
}

export default User

This is a simple JavaScript class that contains all of the user's necessary properties: an email, a photo, and a registration date. Rather than using raw JSON data, we now use this template throughout our code. It provides us with a single source of truth. If we want to add a new property or if the data changes, such as the last login, we can do so here.

Despite the addition of these two new objects, the only change to the code is in src/pages/Home/index.js:


{
  data
    .map(user => new MockedUser(user)) // this is where we do the change
    .map(user => <UserCard key={user.id} user={user} />)
}

We now use the data stored in the MockedUser object instead of the raw JSON data. Assume we want to use the data from the Random User API instead. We only need to make two changes. First, modify the get function in src/services/Api/index.js so that it uses the actual API data:

async function get() {
   return await getApiData() // Instead of getMockData
}

Then, in src/pages/Home/index.js, replace the MockedUser constructor with ApiUser in the map call shown above.

With only two changes, you can now use API data instead of JSON data and keep the project running! Furthermore, you understand what properties are required for user data and can easily add new ones or modify existing ones! I hope you enjoy this blog about JavaScript design patterns. If you're reading it for the first time, I'm delighted to have been your guide. Please feel free to ask any questions you may have. Airo Global Software will be your digital partner.

E-mail id: [email protected]


Author - Johnson Augustine
Chief Technical Director and Programmer
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Apple is well-known for its ease of use. With the introduction of the new Audio Graphs feature in iOS 15, iOS became even more accessible for visually impaired users.

This blog will teach you everything you need to know to start using Audio Graphs in your iOS app. We'll go over what Audio Graphs are and how we can incorporate them into our apps, as well as how to define axes and data points.

By the end of this blog, you'll be prepared to make your app more accessible than ever before and assist more people in using it.

The feature is presented in Apple's WWDC session "Bring accessibility to charts in your app."

What are Audio Graphs?

We will not look at how to create and use graphs in this blog. There are numerous effective methods for presenting charts and graphs to users, ranging from simple self-build solutions to ready-to-use frameworks such as charts. Instead, we'll work with a sample set of data points and concentrate on adding the Audio Graphs feature.

The displayed page also includes additional information about the data series, such as a summary, features, and statistics. All of this information will be read aloud to the user by VoiceOver.

Implementing Audio Graphs

The first step is to make the view controller displaying the chart conform to the AXChart protocol. This protocol has only one requirement: the accessibilityChartDescriptor property of type AXChartDescriptor.

A descriptor of this type contains all of the information required to display the Audio Graph page, such as the title, summary, and series. A chart descriptor is made up of additional descriptors for the axes and data points. Let's take a closer look at these classes before combining them into an AXChartDescriptor.

Describing the Axes

AXNumericDataAxisDescriptor and AXCategoricalDataAxisDescriptor are the two types of axis descriptors. Both implement the same AXDataAxisDescriptor basis protocol, which cannot be used directly. Both types of descriptors can be used on the x-axis. However, only a numeric descriptor can be used to define the y axis. This makes sense because the graph's points can only be numbers, whereas the x values can be both points and categories. Let's begin by making an x-axis, which can be done as follows:

private var xAxis: AXNumericDataAxisDescriptor {
    // 1
    AXNumericDataAxisDescriptor(
        title: "The x axis",
        // 2
        range: (0...9),
        // 3
        gridlinePositions: [],
        // 4
        valueDescriptionProvider: { (value: Double) -> String in
            "\(value)"
        }
    )
}
  • For the time being, we'll create a numerical axis with an example title. For a real-world app, you should use a more descriptive title, as this is what VoiceOver will read.
  • An axis must also be aware of its range. We'll create 10 data points in a later step, so the range in this example is (0...9). When creating your points based on real-world data, you can specify the number of values to display in the graph.
  • Next, we can pass in an array of positions at which grid lines should be displayed. However, regardless of the values entered, this appears to have no effect on the created Audio Graph detail page.

Please let me know if you have any additional information about this property in the comments!

  • Finally, an axis must understand how to convert data points into strings that can be read by the user. This is accomplished by providing a closure that converts a Double value to a String. We just embed the value in a string in this case, but it could also be used for matters or other transformations.

The y axis is created in the same manner as the x-axis. It also requires a title, range, gridlinePositions, and valueDescriptionProvider:

private var yAxis: AXNumericDataAxisDescriptor {
    AXNumericDataAxisDescriptor(
        title: "The y axis",
        range: (0...9),
        gridlinePositions: [],
        valueDescriptionProvider: { (value: Double) -> String in
            "\(value)"
        }
    )
}

Describing the Data Points

The graph points are encapsulated in an AXDataSeriesDescriptor, which represents a single data series. An Audio Graph can have multiple data series, but for the time being, we'll only use one. An AXDataSeriesDescriptor is made up of a name, a boolean flag indicating whether or not the data series is continuous, and an array of AXDataPoint objects representing the actual points. A point has an x-axis value called xValue at all times. The y axis value, yValue, is optional. A point can also have a label to give the data point a name, as well as additionalValues, which can be numerical or categorical values for this data point. Given some example values, here's how to make an AXDataSeriesDesciptor:

private var series: [AXDataSeriesDescriptor] {
    // 1
    let yValuesSeries = [4.0, 5.0, 6.0, 3.0, 2.0, 1.0, 1.0, 3.0, 6.0, 9.0]
    let dataPointsSeries = yValuesSeries.enumerated().map { index, yValue in
        AXDataPoint(x: Double(index), y: yValue)
    }

    // 2
    return [
        AXDataSeriesDescriptor(
            name: "Data Series 1",
            isContinuous: true,
            dataPoints: dataPointsSeries
        )
    ]
}
  • For the y axis, we create data points with an array of values. The value of the x-axis corresponds to the index of a number in the array.

  • We then use the array of AXDataPoint objects we just created and wrap them in an AXDataSeriesDescriptor. We use true for isContinuous to display a single coherent graph and "Data Series 1" as the name for this data series.

  • To display all points separately, use false for isContinuous. Check it out for yourself or wait for the next section, where we'll go over more options in depth.

Putting all Descriptors together

We've made two descriptors for the axes and one for a data series. We are now ready to combine them to form one AXChartDescriptor. Here's how we can go about it:

// 1
var accessibilityChartDescriptor: AXChartDescriptor? {
    // 2
    get {
        AXChartDescriptor(
            title: "Example Graph",
            summary: "This graph shows example data.",
            xAxis: xAxis,
            yAxis: yAxis,
            series: series
        )
    }
    // 3
    set { }
}
  • As previously stated, in order to implement the AXChart protocol, we must provide the accessibilityChartDescriptor property of type AXChartDescriptor.
  • To do so, we specify a title and a summary that will be displayed and read to the user on the Audio Graphs detail page. We also pass in the axis and data series descriptors that we created earlier.
  • We leave the setter empty because this property will never be set from anywhere else.

Using Audio Graphs

Let's take a look at our audio graph in action. It can be intimidating to use VoiceOver if you are not used to it. A double or triple tap on the iPhone's back is the best way to enable or disable it. Scroll down to Back Tap in Settings > Accessibility > Touch. A variety of actions can be defined here to be triggered by a double or triple back tap.

Next, launch your app and navigate to the graph for which you have enabled the Audio Graph feature. Enable VoiceOver and swipe until the graph is selected.

If you're not sure how to use VoiceOver, you can consult Apple's VoiceOver gesture guide or raywenderlich.com's iOS Accessibility: Getting Started.


Open the Audio Graph detail page — this is the result of our efforts!

Swipe right until the Play button appears, then double-tap to listen to the Audio Graph. It's easy to see (or hear) why this new feature improves graph accessibility for visually impaired users so much.

Where To Go From Here

As demonstrated in this tutorial, Apple made it very simple to add audio representations to existing graphs. All you have to do is wrap your data points in an AXDataSeriesDescriptor and add some metadata.

In the following section, we'll look at how adaptable they are. We'll go over various types of axes and show more than one data series. This section will be published next week, so stay tuned for more information!

Audio Graphs can help you make your apps more accessible to a wider audience. This will provide your users with a better experience.

If you have questions or remarks, please let us know in the given email below. Airo Global Software will be your digital partner.

E-mail id: [email protected]


Author - Johnson Augustine
Chief Technical Director and Programmer
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Equatable, Comparable, Identifiable, and Hashable solutions

Protocols are not new to iOS or its cousin OS X; in fact, delegate protocols are the bread and butter of more than half of the frameworks, though this may change in the coming years with the introduction of async/await. Having said that, since SwiftUI's release in 2019, protocols appear to be changing their colors.

That is because SwiftUI includes a number of mandatory protocols that are linked to the language itself. Although it is not always clear what is going on, basic protocols such as Equatable, Comparable, Identifiable, and Hashable are used.

Identifiable

This is the first protocol you'll likely encounter as a new SwiftUI coder when attempting to define a ForEach loop, for example within a List — assuming we have an array named dice containing a custom struct.

struct ContentView: View {
 @State var dice = [Dice]()
 var body: some View {
   ForEach(dice) {
     Text(String($0.value))
   }
 }
}

The compiler is looking for a way to uniquely identify each row within the loop, so the Dice struct shown here must conform to Identifiable. Conformance is obtained through code such as this.

struct Dice: Identifiable {
 //  var id = UUID()
 var id = Date().timeIntervalSince1970 // epoch [dies Jan 19, 2038]
 var value: Int!
}

Hashable

The second protocol you're likely to encounter is Hashable, which SwiftUI requires for loops like the one shown here.

ForEach(dice, id: \.self) { die in
 Text("Die: \(die.value)")
}

But be careful, because adding a third protocol, Equatable, with the definition shown below will break the Hashable contract (equal values no longer guarantee equal hashes) and can crash your code.

struct Dice: Equatable, Hashable {
 var id = UUID()
 var value: Int!
 static func ==(lhs: Dice, rhs: Dice) -> Bool {
   lhs.value == rhs.value
 }
}

The Hashable requirements here necessitate the use of a unique identifier, similar to the Identifiable protocol.

To use both the Hashable and the Equatable protocols, you must instruct the Hashable protocol to focus on the id, which is, of course, that unique Identifiable value.

extension Dice: Hashable {
 static func ==(lhs: Dice, rhs: Dice) -> Bool {
   lhs.id == rhs.id
 }
 func hash(into hasher: inout Hasher) {
   hasher.combine(id)
 }
}

However, the hash function can also be useful in this case because it guarantees that it will produce the same output given the same input. Although this example may appear to be a little pointless, you can use code like this to generate the same key repeatedly.

.onAppear {
 var hash = Hasher()
 hash.combine(die.id)
 print("hash \(hash.finalize()) \(die.hashValue)")
}

The main page at Apple provides a more real-world example of how this protocol can be used.

Comparable

Comparable, which appears to be nearly identical to Equatable, is the next protocol on my shortlist.

extension Dice: Comparable {
 static func < (lhs: Dice, rhs: Dice) -> Bool {
   lhs.value < rhs.value
 }
}

This code was added to our SwiftUI interface to enable us to use the new protocol/property.

if dice.count == 2 {
 if dice.first! > dice.last! {
   Text("Winner 1st")
 } else {
   Text("Winner 2nd")
 }
}

However, there is a catch. I can't use the == in the same way because I had to point to the id to conform to the Hashable protocol.

if dice.first! == dice.last! {
 Text("Unequal \(dice.hashValue)")
} else {
 Text("Equal \(dice.hashValue)")
}

To get around/fix this, I'll need to define a new operator. The fix for the above necessitates the creation of a new infix operator, such as ====. Obviously, I'd need to change the code snippet above to use ==== instead of the == shown.

infix operator ==== : DefaultPrecedence
extension Dice {
 static func ====(lhs: Dice, rhs: Dice) -> Bool {
    lhs.value == rhs.value
 }
}

I'm sure Apple would prefer that you use protocols in your everyday code to make it clear what you're trying to accomplish, essentially an extension of types that you can use on your custom objects.

Equatable

Okay, I admit that the infix operator isn't for everyone, especially Swift purists. So here's an alternative that sticks with Equatable and doesn't require an infix operator. Within it, I define the view as conforming to the Equatable protocol in order to target the face value of my die.

Please take note that I used .onAppear to initialize die1 and die2, and then .onChange to handle all subsequent reloads of the dice whenever I rolled a new pair.

struct EqualView: View, Equatable {

 static func == (lhs: EqualView, rhs: EqualView) -> Bool {
    lhs.die1?.value == rhs.die2?.value
 }
 @State var die1:Dice? = nil
 @State var die2:Dice? = nil
 @Binding var dice:[Dice]
 var body: some View {
   Color.clear
     .frame(width: 0, height: 0, alignment: .center)
     .onAppear {
       die1 = dice.first!
       die2 = dice.last!
     }
     .onChange(of: dice) { values in
       die1 = dice.first!
       die2 = dice.last!
     }
    if die1?.value == die2?.value {
     Text("Equal ")
   } else {
     Text("Unequal ")
   }
 }
}

That covers the Swift protocols that are commonly used in SwiftUI; I hope the topic is clear now. If you have any doubt about the protocols used in SwiftUI, don't hesitate to contact us. Airo Global Software will be your digital partner.

E-mail id: [email protected]


Author - Johnson Augustine
Chief Technical Director and Programmer
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

How to Use Git in Android Studio?


Git should be integrated into the project.

Check to see if Git is set up.

Navigate to Android Studio > Preferences > Version Control > Git. To ensure that Git is properly configured in Android Studio, click Test.

Allow integration of version control

Assume you've just started a new Android project called MyApplication. Go to VCS > Enable Version Control Integration in Android Studio. If it has previously been integrated with a version control system, this option will be hidden.

Then, as the version control system, select Git.

A default local master branch will be created if VCS is successfully enabled.

To exclude files from Git, add a .gitignore file.

Two .gitignore files are automatically added when you create a new Android project in Android Studio (one in the project root folder and one in the app folder). Git should not contain files such as generated code, binary files (executables, APKs), or local configuration files. Version control should be disabled for those files. Here is the content of my first .gitignore file:

# content of .gitignore
*.iml
.gradle
/local.properties
/.idea/*
.DS_Store
/build
/captures
.externalNativeBuild   
.cxx

Changes are staged and committed

The project is complete and ready for use with Git version control. Go to VCS > Commit to stage and commit your changes.

You will be presented with a dialogue in which you can examine all files that will be added, enter commit messages, and commit. You can uncheck any files that you do not want to be part of this commit.

When you click commit, a popup alerts you that you haven't yet configured your username or email address. Because they will be attached to your commit message, you should always configure them.

"Set properties globally" is an option. I recommend that you do not check this because doing so will result in every git project on your local machine having the same username/email. You may want to have separate usernames/emails for side projects and company projects.

All done — the entire project has now been committed to Git.

Configure Remote Connections

Go to VCS > Git > Remote to add the project to the remote repository.

To add a new remote, click "+," then enter your remote URL in the URL box. Your local project is now linked to your remote GitHub repository. You can use Bitbucket, GitLab, or any other repository host in addition to GitHub.

Push the Changes to the Remote

Go to VCS > Git > Push to push your local changes to the remote repository. The "Push Commits" popup shows which commits will be pushed to the remote-tracking branch. You may proceed with the push.

Obtain the Changes from the Remote

To download the most recent remote changes, navigate to VCS > Git > Pull.

The popup "Pull Changes" appears. I won't go into detail about the pull strategy; simply use the default strategy and perform the pull.

Collaborate with Branches

Some consider Git's branching model to be its defining feature, and it undoubtedly distinguishes Git in the VCS community. In this section, I'll show you how to use branches in Android Studio.

Make a new branch.

Navigate to VCS > Git > Branches.

The phrase "Git Branches" appears. It displays all of the local and remote branches, as well as the "New branch" option.

Click "New Branch" and give it the name "feature branch."

The other branching possibilities

Assume you're currently on the feature branch. When you expand the menu by clicking on the master branch, you will see many options:

Let me explain each of them in turn:

Checkout: check out the master branch.

Checkout As: check out a new branch created from master.

Compare with Current: show the commits that exist in master but not in feature, and vice versa.

Show Diff with Working Tree: display the difference between master and the current working tree.

Checkout and Rebase onto Current: check out master and rebase it onto the current (feature) branch.

Rebase Current onto Selected: rebase the current (feature) branch onto master.

Merge into Current: merge master into the current (feature) branch.

Rename: change the name of the master branch.

Delete: delete the master branch.

You will select the best option based on your requirements.

Display Log History

Select VCS > Git > Show History from the menu.

The history of the currently open file will be displayed in Android Studio,

You can view the entire log history by clicking on the "Log" tab.

You can filter the history here by branch, user, and date, making it easier to find the commit you're looking for.

If you have any doubts about how to use Git in Android Studio, don't hesitate to contact us. Airo Global Software will be your digital partner.

E-mail id: [email protected]


Author - Johnson Augustine
Chief Technical Director and Programmer
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Git Command Line


Git can be used in a variety of ways. It is compatible with a wide range of command-line tools and graphical user interfaces, but only the Git command line can execute all Git commands. The commands listed below will assist you in learning how to use Git from the command line.

Basic Git Commands

Here is a list of the most important Git commands that are used on a daily basis.

  • Git Config command
  • Git init command
  • Git clone command
  • Git add command
  • Git commit command
  • Git status command
  • Git push Command
  • Git pull command
  • Git Branch Command
  • Git Merge Command
  • Git log command
  • Git remote command

Let's go over each command in detail.

Git config command

This command sets the user's preferences. The Git config command is the first and most important command on the Git command line. This command specifies the author name and email address that will be associated with your commits. Git configuration is also used in other situations.

Syntax

$ git config --global user.name "ImDwivedi1"
$ git config --global user.email "[email protected]"  

Git Init command

This command generates a local repository.

Syntax

$ git init Demo

The init command will create a new repository from scratch.

Git clone command

This command is used to create a copy of a repository from an existing URL. If you want a local copy of a GitHub repository, this command creates that copy in your local directory using the repository URL.

Syntax

$ git clone URL

Git add command

This command adds a file or files to the staging (Index) area.

Syntax

To add one file

$ git add Filename

To add more than one file

$ git add *

Git commit command

The commit command is used in two ways. They are listed below.

Git commit -m

This command moves the head forward. It records a snapshot of the staged files in the version history and attaches a message to it.

Syntax

$ git commit -m " Commit Message" 

Git commit -a

This command automatically stages and commits every change to files that are already tracked by Git (that is, files previously added with git add).

Syntax

$ git commit -a 

Git status command

The status command displays the current state of the working directory and staging area. It shows which changes have been staged, which have not, and which files are not being tracked by Git. It provides no information about the committed project history. You must use the git log for this. It also shows which files you've modified and which you still need to add or commit.

Syntax

$ git status

Git push Command

It is used to transfer content from a local repository to a remote repository. The act of transferring commits from your local repository to a remote repository is known as pushing. It is the counterpart of git fetch: fetching imports commits to local branches, while pushing exports commits to remote branches. The git remote command is used to configure remote branches. Pushing has the potential to overwrite changes, so it should be used with caution.

The git push command can be used in the following ways.

Git push origin master

This command pushes changes from the master branch to your remote repository.

Syntax

$ git push origin master

Git push --all

This command pushes all branches to the server repository.

Syntax

$ git push --all

Git pull command

The pull command is used to get data from GitHub. It downloads and merges changes from the remote server into your working directory.

Syntax

$ git pull URL  

Git Branch Command

This command displays a list of all the branches in the repository.

Syntax

$ git branch 

Git Merge Command

This command is used to merge the history of the specified branch into the current branch.

Syntax

$ git merge BranchName

Git log Command

This command examines the commit history.

Syntax

$ git log

If no argument is passed, the Git log displays the most recent commits first. We can limit the number of log entries displayed by specifying a number, such as -3 to show only the last three entries.

$ git log -3

Git remote Command

The Git Remote command connects your local repository to a remote server. You can use this command to create, view, and delete connections to other repositories. This command does not allow you to access repositories in real-time.

If you have any doubt about the Git command line, don't hesitate to contact us. Airo Global Software will be your digital partner.

E-mail id: [email protected]


Author - Johnson Augustine
Chief Technical Director and Programmer
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/