
On StackOverflow, the question "Why is Spring faster than Vert.x?" in its various variations is asked about once a month. After all, Spring is still by far the most popular JVM framework, and many businesses rely on it. However, the Spring Framework is not well known for its performance. Vert.x, on the other hand, is regarded as one of the fastest JVM frameworks. As a result, Vert.x is expected to outperform Spring in any benchmark. That, however, is not always the case.

In this blog, I'd like to discuss the various causes of those counterintuitive results, as well as offer some suggestions for how to improve your benchmarking strategy.

To begin, what do we mean when we say a framework or language is "fast"? When it comes to web services, we usually don't mean response time, also known as request latency. What we usually mean is a different metric known as throughput. Latency is the amount of time it takes to respond to a single request. Throughput is how many requests a server can handle in a given amount of time, typically a second.

Let's look at where developers get the idea that Vert.x should be faster than Spring. A popular benchmark for web frameworks run by TechEmpower attempts to measure the throughput of various languages, runtimes, and frameworks using a few scenarios. Typically, Vert.x performs admirably in these tests.

In the 20th round, for example, Vert.x is ranked 10th with 572K requests per second, while Spring is ranked 219th with 102K requests per second. This is truly impressive.

However, replicating those impressive results can be difficult at times, hence the title's question.

Let's try to figure out what the main flaws are with the benchmarking strategy.

When I say Spring, I mean the Spring Framework, not Spring WebFlux / Project Reactor, which works in a different way. In addition, I'll assume that the Spring application is running in a Tomcat container.

Vert.x is I/O focused

The Vert.x framework's creators recognized early on that the bottleneck of most real-world applications is waiting for I/O. That is, it makes no difference how well your application is written, how clever the JIT optimizations are, or how cutting-edge the JVM GC is. Most of the time, your application will be waiting for a response from the database or from a service written in Python or PHP ten years ago.

Vert.x addresses this issue by placing all I/O work in a queue. Because adding a new task to a queue is not a particularly time-consuming operation, Vert.x can handle hundreds of thousands of them per second.

Of course, this is a very simplified explanation. There are multiple queues, context switches, reactive drivers, and a slew of other interesting details that I won't go into here. What I do want you to keep in mind is that Vert.x is designed for I/O.

Let's take a look at how Vert.x performance is typically measured:

app.get("/json").handler(ctx -> {     
   ctx.response().end("Hello, World!");
});

Let's compare the preceding example to the code from the Vert.x benchmark, which still performs quite well, with a throughput of 4M requests per second, but not as well as some other languages and frameworks:

app.get("/json").handler(ctx -> {     
   ctx.response()
       .putHeader(HttpHeaders.SERVER, SERVER)
       .putHeader(HttpHeaders.DATE, date)
       .putHeader(HttpHeaders.CONTENT_TYPE, "application/json")
       .end(Json.encodeToBuffer(new Message("Hello, World!")));
   }
);

Can you spot the difference? There is almost no I/O in the benchmark that most developers run. There is some, because receiving a request and writing a response are still I/O operations, but not much compared to interacting with a database or a filesystem.

As a result, that test diminishes the benefit of using a reactive framework like Vert.x.

If you want to see real benefits from a reactive framework like Vert.x, write a benchmark application that does some I/O work, such as writing to a database or reading from a remote service.

Benchmarking with Low Concurrency

The Spring Framework handles concurrency by allocating a thread pool dedicated to serving incoming requests. This is also referred to as the "thread per request" model. When you run out of threads, your Spring application's throughput begins to suffer.

ab -n 10000 -c 100 http://localhost:8080/

To bombard our service with requests, we use a tool called ApacheBench (ab). The -c flag tells ab to issue 100 requests concurrently, and the -n flag sets the total number of requests.

You run this test on two services, one written in Spring and one in Vert.x, and there is no difference in performance. Why is this the case?

Unlike Vert.x, the Spring Framework does not directly control the number of threads it uses. Instead, the container, in our case Tomcat, determines the number of threads. Tomcat's default maximum is 200 threads. This means there shouldn't be much of a difference between the Spring and Vert.x applications until you have at least 200 concurrent requests. Simply put, you're not stressing your application enough.

Set the number of concurrent requests higher than the maximum size of your thread pool if you want to stress your Spring application.
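For example, with Spring Boot's embedded Tomcat, the pool size can be tuned in application.properties (the property name below assumes Spring Boot 2.3 or newer), and the load generator can then be pushed past it:

# application.properties -- embedded Tomcat defaults to 200 worker threads
server.tomcat.threads.max=200

# stress the service with more concurrent connections than worker threads
ab -n 100000 -c 400 http://localhost:8080/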

Benchmarking on the Same Machine

Let us return to how Vert.x works. I've already mentioned that Vert.x improves performance by queuing all incoming requests. When a response is received, it is also added to the same queue. Only a few threads, known as EventLoop threads, are busy processing that queue. The greater the number of requests, the busier the EventLoop threads become and the more CPU they consume.

What now happens when you run a benchmark on your computer? As an example:

ab -n 100000 -c 1000 http://localhost:8080/

Here is what happens next. The benchmark tool will attempt to generate as many requests as possible, utilizing all of your machine's CPU resources. The Vert.x service will attempt to serve all of those requests while also trying to use all of the available resources. The two processes end up competing for the same CPU, and both are starved.

To maximize the performance of the Vert.x application during the benchmark, run it on a separate machine that does not share CPU with the machine generating the load.

This brings us to the following point.

The Spring Framework's Performance Is Excellent

I've been a huge fan of Vert.x for at least the last five years. But consider the throughput (requests per second) of the Spring application in the benchmarks mentioned earlier:

  • Plaintext: 28K
  • JSON serialization: 20K
  • Single query: 14K
  • Fortunes: 6K
  • Multiple queries: 1.8K
  • Data updates: 0.8K

Conclusion

As software engineers, we enjoy comparing the performance of our favorite programming language or framework to that of others.

It's also critical to use objective metrics when doing so. Measuring service throughput with a benchmark is a good place to start, but it must be done correctly.

Check to see if the test you're running is CPU or I/O bound, or if it has another bottleneck.

Also, run your benchmarks on a separate machine from the one that runs your application code. Otherwise, you might be disappointed with the results.

Finally, I've witnessed companies encountering throughput bottlenecks in their language or framework, and I've even assisted in resolving some of them. However, there are many successful businesses out there that may not require all of that throughput, and you may be working for one of them. Creating a good benchmark is difficult and time-consuming. Consider whether it is the most pressing issue you should be addressing. If you have any doubts about the above topic, don't hesitate to contact us. Airo Global Software will be your digital partner.

E-mail id: [email protected]


Author - Johnson Augustine
Chief Technical Director and Programmer
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

I burned myself out in my first six months as a tech lead. It had always been a goal of mine to be a tech lead, but after accomplishing that goal, I spent some time regretting my decision.

I was putting in a lot more hours as a tech lead during the day and catching up on my ticket work at night. I wasn't sure what the role entailed, so I just did what I saw the previous tech lead do. I was totally unprepared for it.

That was five years ago, and in that time, I've served as a tech lead and engineering manager for three teams. With time, the experience became easier, and the expectations of a tech lead became clearer. There's no reason for new tech leaders to figure it all out on their own — here's my advice on how to be a better tech leader.

First, what exactly is a Tech Lead?

At different companies, the term "tech lead" can mean slightly different things. A team lead is a type of tech lead in which you are responsible for the day-to-day operations and delivery of a quality product from a development team but have no hiring/firing power.

The second type of tech lead is an engineering manager, in which you are a manager with hiring and firing authority as well as people management responsibilities such as regular performance reviews. These can sometimes be combined into a single role. I've also seen it as two distinct roles.

The third type of tech lead is the lead engineer, who is a senior member of the team who occasionally leads smaller projects but primarily in a technical capacity such as doing code reviews, developing a data model for a new project, or architecting the project.

This blog will concentrate on the first type of tech lead, also known as the team lead. Now that that's out of the way, here are some principles to guide your tech leadership.

  • Make Yourself a Shield for Your Team

Handle anything that may disrupt the team, such as responding to Slack users who may have questions about your team or upcoming projects, collaborating with internal support teams on escalated issues, and representing the team's interests in meetings as needed.

This saves the team time by preventing other people (such as upper management) from making requests directly to your team's developers, which can distract them and divert their attention away from important planned work. So your goal is to notice when this is occurring and to communicate clearly to those individuals that such requests must first go through the proper channels.

A lot of this work is really just triaging the issue or question at hand and directing it to the appropriate resource.

  • Maintaining Knowledge of What Other Teams Are Working On

Meetings are one of those necessary evils that we like to complain about, but every now and then, some of those meetings contain useful information. In an ideal world, I'd get a five-minute recap of every meeting in which I'm not actively participating, but that's not going to happen.

They provide an opportunity to learn about what's going on in other teams, whether they're teams you work directly with or teams that are more distant from your own work. You may overhear that a team is developing a new service, which gives you the opportunity to say, "Hey! My team is working on a similar project. Let's have a discussion about it!" Every now and then, that interaction saves you a quarter's worth of work.

  • Discussions with the team should be facilitated.

Have you ever been in a team meeting where most of the time is spent in silence, waiting for someone to speak up (especially in a remote setting)? "Should we explore this library?" someone might ask, to which crickets might respond. People are sometimes unsure of how to respond and are afraid of appearing stupid. It is your responsibility as a tech lead to facilitate the discussion with the team so that the team can own — and make — the decision. Assist people in gathering as much information as possible to aid in decision-making. You can request clarification or ask dumb questions.

Half of the time, other people have the same questions but are too shy to ask them because they don't want to be the one who asks a stupid question. You may need to rely on your developers in the early stages of your role. You won't know everything about the codebase, but you must be able to answer questions like "Is feature X possible? Do we have the information in system Z?" If you've been on the team for a while, you'll know who is familiar with which parts of the codebase. If you're new, they'll be the ones to point you in the right direction.

If you don't have enough information to make a decision, say so and let whoever needs the decision know when you'll be able to make it.

  • Maintain high standards and set a good example.

Don't waste time pointing out stylistic differences. Add a linter and/or a formatter to the project and direct the discussion to the tool. It's one thing for another developer to leave 15 nitpicky comments; it's quite another to ask if we should add a new linter rule.

Where possible, require code changes to be accompanied by unit tests, and track unit test coverage. 70 percent coverage is sufficient for me across the entire project.

Some scenarios are more difficult to cover with unit tests, but these are the exceptions. Unit tests are required for any business logic or unusually specific behaviour; this is the only way to prevent other developers from accidentally breaking code by removing code that they believe is no longer required.

  • Concentrate on the Big Picture

You have to pick your battles, and there are some minor decisions that aren't worth your time. If a developer implemented something in a procedural style when I wanted it to be more object-oriented, I'd bite my tongue and move on.

These kinds of decisions don't really matter in the long run. What matters is that your team does not push broken changes into production. A few procedural snippets here and there will not derail production.

That is not to say that you should not provide feedback. Instead of the tech lead's personal preferences, it's sometimes more helpful to ask questions to ensure that their proposed solution meets the needs of the problem, such as "Did you consider X or Y?"

You should not instruct your developers on how to carry out their change (unless they are specifically asking for that feedback). If you do this too frequently, they may begin to feel more like code monkeys simply doing what the tech lead says.

By asking the right questions, you can sometimes coach them into a different solution, allowing them to own the idea and implementation more than you saying, "Build it this way." Sometimes you can't because their solution also meets all of the needs equally, and choosing between the two approaches is a matter of personal preference. In that case, don't sweat the small stuff and get on with your life.

Conclusion

As a developer, you won't often need these skills, but as a tech lead they come into play daily. As a result, new tech leads typically face a learning curve as they figure out the people side of leading a team while balancing their own responsibilities. So, if you want to be a better tech lead in 2022, focus on the following:

  • Be a shield for your team.
  • Keep track of what other teams are working on.
  • Facilitate discussions within your team.
  • Maintain high standards and set a good example.
  • Consider the big picture.

If you have any questions about the above topic, please do not hesitate to contact us. Your digital partner will be Airo Global Software.

E-mail id: [email protected]


Author - Johnson Augustine
Chief Technical Director and Programmer
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Details on the new features and packages in CRA 5

Create React App (CRA) is a fast way to scaffold a React project. A project can be generated with the command npx create-react-app <project-name>. We get the most recent packages and the execution environment for a React project with a single command. It is both convenient and efficient. CRA 5 was released on Dec 14, 2021. It has the following new features and new packages:

  • Support for Node 10 and 12 has been discontinued.
  • Enhancements to the Fast Refresh.
  • Package manager detection has been improved.
  • To improve compatibility with other tools, all dependencies were unpinned.
  • Tailwind support has begun.
  • Webpack 5, Jest 27, ESLint 8, and PostCSS 8 are included.

Let’s go through these details.

Install a New Create React App

create-react-app is a global command-line utility that allows you to create new React projects. The created projects use the most recent version of react-scripts, which is currently 5.0.0. CRA 5 no longer supports Node 10 and 12, and it now requires Node 14 or higher. Create-react-app will fail if the node version does not meet the requirement.

% nvm use 12
 Now using node v12.22.7 (npm v6.14.15)
 % node --version
 v12.22.7
 % npx create-react-app my-app
 npx: installed 67 in 3.482s
 You are running Node 12.22.7.
 Creating React App requires Node 14 or higher.
 Please update your version of Node.

The installation is complete after changing the node version to 17.

% nvm use 17
 Now using node v17.1.0 (npm v8.1.2)
 % node --version
 v17.1.0
 % npx create-react-app my-app
 Creating a new React app in /Users/jenniferfu/funStuff/my-app.
 Installing packages. This might take a couple of minutes.
 Installing react, react-dom, and react-scripts with cra-template...
 added 1375 packages in 30s
 163 packages are looking for funding
  run `npm fund` for details
 Initialized a git repository.
 Installing template dependencies using npm...
 added 33 packages in 4s
 163 packages are looking for funding
  run `npm fund` for details
 Removing template package using npm...
 removed 1 package, and audited 1408 packages in 2s
 163 packages are looking for funding
  run `npm fund` for details
 6 moderate severity vulnerabilities
 To address all issues (including breaking changes), run:
  npm audit fix --force
 Run `npm audit` for details.
 Created git commit.
 Success! Created my-app at /Users/jenniferfu/funStuff/my-app
 Inside that directory, you can run several commands:
  npm start
    Starts the development server.
  npm run build
    Bundles the app into static files for production.
  npm test
    Starts the test runner.
  npm run eject
    Removes this tool and copies build dependencies, configuration files
    and scripts into the app directory. If you do this, you can’t go back!
 We suggest that you begin by typing:
  cd my-app
  npm start

Upgrade an Existing Create React App Project

An existing Create React App project relies on react-scripts, which includes its scripts and configuration. The project can be updated by upgrading react-scripts to a specific version. To upgrade to the latest version, the official documentation suggests running the following command:

 npm install --save react-scripts@latest

By issuing this command, we smoothly upgrade react-scripts from version 4.0.3 to version 5.0.0.

The following are the differences between the CRA 4 package.json and the CRA 5 package.json after upgrading the react-scripts version. The differences appear to be minor. We can manually update the testing-library and web-vitals versions to match the versions in CRA 5.
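As a rough sketch, the relevant part of the diff looks something like this (the old versions depend on when the CRA 4 project was generated; apart from react-scripts, the version numbers here are illustrative):

- "@testing-library/react": "^11.1.0",
+ "@testing-library/react": "^12.0.0",
- "react-scripts": "4.0.3",
+ "react-scripts": "5.0.0",
- "web-vitals": "^1.0.1",
+ "web-vitals": "^2.1.0",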

TypeScript Version Upgrade

If you're using TypeScript, you can start a new project with the following command:

npx create-react-app --template typescript <project name>

Here are the differences between the JavaScript CRA package.json and the TypeScript CRA package.json.

TypeScript has been updated from version 4.1 to version 4.5 from CRA 4 to CRA 5.

Enhancements to the Fast Refresh

Fast refresh has been improved for the Hot Module Replacement (HMR) runtime, with the bailout behaviour described below:

  • If fast refresh is not enabled, reload the page manually.
  • If fast refresh is enabled and there are updated modules, rely on fast refresh to be error-resistant and skip the forced reload.
  • If fast refresh is enabled, no modules have been updated, and the status of the hot update is aborted or failed, perform a forced reload.

Improved Detection of Package Managers

In CRA 4, npx create-react-app my-app will use yarn to install dependencies if yarn is installed. Alternatively, a flag can be set to use npm:

npx create-react-app my-app --use-npm

In CRA 5, this behaviour has been modified. If the env variable npm_config_user_agent is set to 'yarn', the package manager will be yarn:

function isUsingYarn() {
  return (process.env.npm_config_user_agent || '').indexOf('yarn') === 0;
 }

Otherwise, it is determined by how the command is executed:

yarn create react-app my-app // uses yarn
npm init react-app my-app // uses npm
npx create-react-app my-app // uses npm

Dependencies that aren't pinned

In CRA 5, the following react-scripts entry can be found in the installed package-lock.json:

"react-scripts": {
  "version": "5.0.0",
  "resolved": "https://registry.npmjs.org/react-scripts/-/react-scripts-5.0.0.tgz",
  "integrity": "sha512-3i0L2CyIlROz7mxETEdfif6Sfhh9Lfpzi10CtcGs1emDQStmZfWjJbAIMtRD0opVUjQuFWqHZyRZ9PPzKCFxWg==",
  "requires": {
    "@babel/core": "^7.16.0",
    "@pmmmwh/react-refresh-webpack-plugin": "^0.5.3",
    "@svgr/webpack": "^5.5.0",
    "babel-jest": "^27.4.2",
    "babel-loader": "^8.2.3",
    "babel-plugin-named-asset-import": "^0.3.8",
    "babel-preset-react-app": "^10.0.1",
    "bfj": "^7.0.2",
    "browserslist": "^4.18.1",
    "camelcase": "^6.2.1",
    "case-sensitive-paths-webpack-plugin": "^2.4.0",
    "css-loader": "^6.5.1",
    "css-minimizer-webpack-plugin": "^3.2.0",
    "dotenv": "^10.0.0",
    "dotenv-expand": "^5.1.0",
    "eslint": "^8.3.0",
    "eslint-config-react-app": "^7.0.0",
    "eslint-webpack-plugin": "^3.1.1",
    "file-loader": "^6.2.0",
    "fs-extra": "^10.0.0",
    "fsevents": "^2.3.2",
    "html-webpack-plugin": "^5.5.0",
    "identity-obj-proxy": "^3.0.0",
    "jest": "^27.4.3",
    "jest-resolve": "^27.4.2",
    "jest-watch-typeahead": "^1.0.0",
    "mini-css-extract-plugin": "^2.4.5",
    "postcss": "^8.4.4",
    "postcss-flexbugs-fixes": "^5.0.2",
    "postcss-loader": "^6.2.1",
    "postcss-normalize": "^10.0.1",
    "postcss-preset-env": "^7.0.1",
    "prompts": "^2.4.2",
    "react-app-polyfill": "^3.0.0",
    "react-dev-utils": "^12.0.0",
    "react-refresh": "^0.11.0",
    "resolve": "^1.20.0",
    "resolve-url-loader": "^4.0.0",
    "sass-loader": "^12.3.0",
    "semver": "^7.3.5",
    "source-map-loader": "^3.0.0",
    "style-loader": "^3.3.1",
    "tailwindcss": "^3.0.2",
    "terser-webpack-plugin": "^5.2.5",
    "webpack": "^5.64.4",
    "webpack-dev-server": "^4.6.0",
    "webpack-manifest-plugin": "^4.0.2",
    "workbox-webpack-plugin": "^6.4.1"
  }
 }

All versions use caret ranges, which means that these packages will resolve to the most recent compatible minor or patch version.

Many packages in CRA 4's react-scripts, by contrast, were pinned to exact versions.

CRA 5 unpins babel-loader, whose pinned version was causing problems when using CRA with Storybook. Furthermore, CRA 5 unpins all dependencies for improved compatibility with other tools.

What else do we discover?

  • tailwindcss (^3.0.2) is the newest addition.
  • Major packages are updated: webpack (^5.64.4), jest (^27.4.3), eslint (^8.3.0), and postcss (^8.4.4).

Tailwind Support

Tailwind is a CSS framework that includes classes such as flex, text-5xl, font-bold, text-green-500, and others. These classes can be combined to create any design, right in the markup.

Tailwind searches for class names in HTML files, JavaScript components, and other templates. It generates the appropriate styles and saves them to a static CSS file. Tailwind is quick, adaptable, and dependable — with no downtime. Tailwind support has been added to CRA 5.

Tailwind is normally set up and used in 5 steps. With the pre-configured CRA 5, only three steps are required:

Step 1: Set up the template paths.

Create the tailwind.config.js configuration file in the root directory:

module.exports = {
  content: [
    './src/**/*.{html,js,jsx}',
  ],
  theme: {
    extend: {},
  },
  plugins: [],
 }

Step 2: Include the Tailwind directives in your CSS file.

Here's the src/index.css file:

@tailwind base;
@tailwind components;
@tailwind utilities;

Step 3: Integrate Tailwind into React components.

Here's an example of a src/App.js file:

import './App.css';

function App() {
  return (
    <div className="App">
      <h1 className="text-5xl font-bold text-green-500">Create React App 5</h1>
    </div>
  );
}

export default App;

text-5xl sets font-size: 3rem and line-height: 1. font-bold sets font-weight: 700. text-green-500 sets color: rgb(34 197 94).

When we run the code with npm start, we can see that Tailwind styles have been applied to the text.

Webpack 5

Webpack is a module bundler. It can bundle CommonJS (CJS), AMD, UMD, ESM, and other module formats.

On October 10, 2020, Webpack 5 was released, with the following major features:

  • Persistent Caching improves build performance.
  • Long-Term Caching has been improved with new algorithms and defaults.
  • Bundle sizes have been reduced thanks to improved Tree Shaking and Code Generation.
  • Module Federation was introduced, allowing multiple Webpack builds to work together.

Webpack 5 is included with CRA 5.

Jest 27

Jest is a JavaScript Testing Framework that focuses on test creation, execution, and structuring. Jest is a popular test runner that works with projects that use Babel, TypeScript, Node, React, Angular, Vue, and other technologies.

On May 25, 2021, Jest 27 was released, with the following major features:

  • To update failed snapshots during snapshot tests in watch mode, type u. The interactive mode can now be used to step through failed tests one by one: we can skip a failed test, exit the interactive mode by typing q, or return to watch mode by pressing Enter.
  • Compared to Jest 26, the initialization time per test file was reduced by 70%.
  • User configurations written in ESM are supported, and all pluggable modules can load ESM.
  • Test files that are symlinked into the test directory are now supported, a feature requested by Bazel.
  • Transforms are now asynchronous, a feature requested by esbuild, Snowpack, and Vite.
  • The default test runner has been changed from jest-jasmine2 to jest-circus, and the default test environment has been changed from 'jsdom' to 'node'.
  • The new Fake Timers implementation introduced in Jest 26 becomes the default.
  • The done test callback cannot be called more than once, and calling done and returning a Promise cannot be combined.
  • A describe block must not return a value.

Jest 27 is packaged with CRA 5.

ESLint 8

ESLint is a tool for detecting and reporting patterns in JavaScript and TypeScript code. It performs traditional linting to detect problematic patterns, as well as style checking to enforce conventions. On October 9, 2021, ESLint 8 was released, with the following major features:

  • Support for Node 10, 13, and 15 has been dropped.
  • The codeframe and table formatters have been removed.
  • The comma-dangle rule schema has been tightened.
  • Unused disable directives can now be fixed with --fix.
  • Four rules have been added to the eslint:recommended preset: no-loss-of-precision, no-nonoctal-decimal-escape, no-unsafe-optional-chaining, and no-useless-backreference.
  • The ESLint class has replaced the CLIEngine class.
  • The deprecated linter object has been removed.
  • The /lib entry point is no longer available.

ESLint 8 is included with CRA 5.

PostCSS 8

PostCSS is a style transformation tool that uses JS plugins. Autoprefixer, for example, is a popular plugin that applies CSS prefixes based on browser popularity and property support.

On September 15, 2020, PostCSS 8 was released, with the following major features:

  • It has a new plugin API that allows all plugins to share a single CSS tree scan. It speeds up CSS processing by up to 20%, reduces the size of node_modules, supports better source maps, and improves the CSS parser.
  • Support for Node 6, 8, 11, and 13 has been dropped.
  • It serves ES6+ sources from the npm package without the need for Babel compilation.
  • The rarely used postcss.vendor API has been removed.

CRA 5 is packaged with PostCSS 8.

Conclusion

CRA 5 has arrived, bringing with it new features and packages. Newly created projects will use CRA 5. If you already have a CRA 4 project, upgrade it as described above.

Thank you for your time. I hope you found this information useful. If you have any questions, please do not hesitate to contact us. Your digital partner will be Airo Global Software.

E-mail id: [email protected]


Author - Johnson Augustine
Chief Technical Director and Programmer
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

When we first start learning Angular, we learn that there are two types of directives: attribute directives and structural directives. We will only look at structural directives in this article. These give us the ability to remove an element and replace it with something else, as well as to create additional elements.

As you are aware, we must distinguish Structural directives from Attribute directives in code: Structural directives should be preceded by *: *ngFor, *ngIf. Actually, when I first read this, I thought the distinction was strange and even cumbersome. Let's see if we can figure out why we need this * for the structural directive.

We will implement three different structural directives throughout the article to help you grasp the main idea.

What is ng-template?

Before we go any further, let's make sure we're all on the same page and understand what ng-template is. Let's create a simple component using this element to see what Angular actually renders:
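A minimal sketch of such a component:

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  template: `
    <ng-template>
      <span>Inside ng-template</span>
    </ng-template>
  `,
})
export class AppComponent {}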

As you can see, we defined an ng-template with a span element inside the component template. However, we do not see this span in a browser. Doesn't it appear to be a waste of time? Wait a minute, of course, it's useful and serves a purpose.

What is ng-container?

Let's look at it again with a component creation:
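Again, a minimal sketch:

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  template: `
    <ng-container>
      <span>Inside ng-container</span>
    </ng-container>
  `,
})
export class AppComponent {}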

We can see the content that we put inside the ng-container here, but the container itself is hidden. If you're familiar with React, you'll probably recognize this behaviour from fragments (or the shorthand for them).

Connect ng-container and ng-template.

Actually, we can ask Angular to render content that we explicitly place inside an ng-template. To do so, we must complete the following steps:

Step 1: Get a reference to the ng-template in the component.

Step 2: Get a reference to a container (any DOM element) where we want to render the template's content.

Step 3: Render the template's content in the container programmatically.

Step 1: We define a template reference #template for the ng-template element and gain access to it via the ViewChild decorator (you can also ask for it like this: @ViewChild('template', { read: TemplateRef }) template: TemplateRef<any>).

Step 2: In the template, define a container where we want to render the predefined template, and get access to it in the component:

We want to read it as a ViewContainerRef. Keep in mind that we can use any DOM element as the container, not just ng-container, but ng-container keeps our layout clean because Angular doesn't leave any ng-container footprint in the layout.

Step 3: We'll only have access to the container and template from the ngAfterViewInit lifecycle hook onwards, and that's where we render the template in the container: we simply generate a view from the template and insert it into the container.
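Putting the three steps together, a sketch of the whole component might look like this (the reference names #template and #container are arbitrary):

import { AfterViewInit, Component, TemplateRef, ViewChild, ViewContainerRef } from '@angular/core';

@Component({
  selector: 'app-root',
  template: `
    <ng-template #template>
      <span>Rendered from ng-template</span>
    </ng-template>
    <ng-container #container></ng-container>
  `,
})
export class AppComponent implements AfterViewInit {
  // Step 1: reference to the ng-template
  @ViewChild('template', { read: TemplateRef }) template!: TemplateRef<any>;

  // Step 2: reference to the container
  @ViewChild('container', { read: ViewContainerRef }) container!: ViewContainerRef;

  // Step 3: render the template's content into the container
  ngAfterViewInit() {
    this.container.createEmbeddedView(this.template);
  }
}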

Structural Directive

You may wonder why, rather than explaining structural directives first, I started with ng-template and ng-container. The reason is that they explain why we put * before these directives. When Angular sees *, it treats our template differently: it wraps our template in an ng-template element. That is, if the ngFor directive were not implemented, we would see nothing. Angular also creates a placeholder, called an embedded view, and the directive can decide what to insert into this empty view container, for example inserting the content of the ng-template at a specific time, as we did above.

Example 1: Creating a custom ngIf directive.

Assume that Angular does not have a built-in directive like ngIf and that we must create our own with the name customIf. Let's build it with the Angular CLI:

ng g d directives/custom-if

It automatically creates a custom-if.directive.ts file in the directives folder and declares it in AppModule:

@Directive({
  selector: '[appCustomIf]'
})
export class CustomIfDirective {
  constructor() { }
}

Because Angular does some work behind the scenes (wrapping our template in an ng-template and creating a placeholder for its content), we can ask Angular to inject those elements in the constructor:

@Directive({
  selector: '[appCustomIf]'
})
export class CustomIfDirective {
  constructor(
     private template: TemplateRef<any>,
     private container: ViewContainerRef) { }
}

Next, we add an @Input so that the condition can be bound to the directive:

@Directive({
  selector: '[appCustomIf]'
})
export class CustomIfDirective {
  @Input() appCustomIf!: boolean;

  constructor(
     private template: TemplateRef<any>,
     private container: ViewContainerRef) { }
}

If @Input is true, the final step is to render the template in a container in the ngOnInit method:

@Directive({
  selector: '[appCustomIf]'
})
export class CustomIfDirective implements OnInit {
  @Input() appCustomIf!: boolean;

  constructor(
     private template: TemplateRef<any>,
     private container: ViewContainerRef) { }

  ngOnInit() {
     if (this.appCustomIf) {
          this.container.createEmbeddedView(this.template);
      }
   }
}
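Once the directive is declared in a module, it can be used just like the built-in ngIf (isVisible is a hypothetical component property):

<div *appCustomIf="isVisible">
  Rendered only when isVisible is true
</div>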

Congratulations! You've implemented your first structural directive. However, I believe that implementing a custom ngFor directive would be even more interesting. Let's give it a shot.

Example 2: Creating a custom ngFor directive.

Let us recall how ngFor is used:

<ul>
  <li *ngFor="let value of values; let i = index">
    {{i}} {{value}}
  </li>
</ul>

It may appear strange given that we know we can bind only JS expressions that produce a single value to a directive. However, this one, let value of values, generates multiple values. The first source of confusion may be that we attempt to map these keywords to the JS keywords that we use in conjunction with for...of, but it has nothing in common with them. Angular has its own DSL, and we can use any word we want. Let us proceed step by step.

First and foremost, the expression on the right side of the ngFor directive is known as microsyntax. Let us describe its structure: the context can be anything with which we want to render a template in the target container, and it is set per element during iteration. Remember that we typically type a template as TemplateRef<any>, where the type parameter is the type of the context for our template.

The more interesting part is which name we should use to gain access to value in values. The binding key is built from two parts:

  • the name of the directive (in our case, ngFor);
  • the capitalized keyword that stands between the loop variable and the iterable (in our case, of).

Together they form the binding key ngForOf.

As I previously stated, you can use any word you want instead of of, for example iterate, and access to the values will then go via @Input('ngForIterate'), the binding key.

So far, so good, I hope. Let's get started with our customFor directive. As is customary, let's use the Angular CLI to build scaffolding for the directive:

ng g d directives/custom-for

@Directive({
  selector: '[appCustomFor]'
})
export class CustomForDirective {
  constructor() { }
}


To spice things up, let's define our microsyntax for the developing directive:

<ul *appCustomFor="let value iterate values; let i = index">
  <li>{{i}} {{value}}</li>
</ul>

With the following decorator, we can gain access to values:

@Input('appCustomForIterate') items!: any[];

We used the following API to render a template in a container:

this.containerRef.createEmbeddedView(this.templateRef)

and the method createEmbeddedView accepts the second argument, which is a template context:

this.containerRef.createEmbeddedView(this.templateRef, {
  '$implicit': '', // any value we want
  index: 0 // any value we want
})

Keep an eye out for the $implicit key, which in our case maps to value in our expression. Let's take a look at how we might put this directive into action:

import { Directive, Input, OnInit, TemplateRef, ViewContainerRef } from '@angular/core';

@Directive({
  selector: '[appCustomFor]',
 })
 export class CustomForDirective implements OnInit {
  @Input('appCustomForIterate') items!: any[];
  constructor(
    private templateRef: TemplateRef<{'$implicit': any, index: number}>,
    private containerRef: ViewContainerRef
  ) {}
  ngOnInit() {
    for(let i = 0; i< this.items.length; i++){
    this.containerRef.createEmbeddedView(this.templateRef, {
        index: i,
        '$implicit': this.items[i]
      })
    }
  }
 }

Note that I purposely changed the keyword used in the built-in ngFor directive, of, into iterate, so we can use this directive as follows:

<div *appCustomFor="let value iterate items; let i = index">
  {{value}} {{i}}
</div>

Example 3: Creating a custom carousel structural directive.

Let's look at a more concrete, production-like example. Suppose we need a carousel. We must pass a list of images to the directive, and the directive must display the current image with the option to move forward/backward.

<div *appCarousel="let image of images; let ctr = ctr">
  <img [src]="image" />
  <button (click)="ctr.prev()">Prev</button>
  <button (click)="ctr.next()">Next</button>
</div>

Let's begin as usual by creating a directive with the Angular CLI and injecting TemplateRef and ViewContainerRef in the constructor. Also, we need to get access to the value in the images variable, which we can do with the binding key @Input('appCarouselOf'):

import { Directive, Input, TemplateRef, ViewContainerRef } from '@angular/core';

@Directive({
  selector: '[appCarousel]',
})
export class CarouselDirective {
  @Input('appCarouselOf') images!: string[];

  currentIndex = 0;

  constructor(
    private templateRef: TemplateRef<any>,
    private viewContainer: ViewContainerRef
  ) {}
}

So far, everything should be familiar. Now let's create the method that is in charge of rendering the template in the container. Remember from the usage of this directive (let ctr = ctr) that we must pass two variables in the template context: $implicit, which holds the current carousel image, and ctr, the controller in charge of image rotation.

@Directive({
  selector: '[appCarousel]',
 })
 export class CarouselDirective implements OnInit {
  // skipped for brevity
  renderCurrentSlide(){
    this.viewContainer.clear();
   this.viewContainer.createEmbeddedView(this.templateRef, {
        ctr: this,
        '$implicit': this.images[this.currentIndex]
    })
  }
 }

And now we'll implement the two methods that will be available on the controller, next and prev:

@Directive({
  selector: '[appCarousel]',
 })
 export class CarouselDirective implements OnInit {
  // skipped for brevity
  next(){
    this.currentIndex = this.currentIndex === this.images.length - 1 ? 0 : this.currentIndex + 1;
    this.renderCurrentSlide();
  }
  prev(){
    this.currentIndex = this.currentIndex - 1 < 0 ? this.images.length - 1: this.currentIndex - 1;
    this.renderCurrentSlide();
  }
 }

The full implementation:

import { Directive, Input, OnInit, ViewContainerRef, TemplateRef } from '@angular/core';
 @Directive({
  selector: '[appCarousel]',
})
export class CarouselDirective implements OnInit {
  @Input('appCarouselOf') images!: string[];
   currentIndex = 0;
   constructor(
    private templateRef: TemplateRef<any>,
    private viewContainer: ViewContainerRef
  ) {}
   ngOnInit() {
    this.renderCurrentSlide();
  }
   renderCurrentSlide(){
    this.viewContainer.clear();
    this.viewContainer.createEmbeddedView(this.templateRef, {
        ctr: this,
        '$implicit': this.images[this.currentIndex]
    })
  }
 
  next(){
    this.currentIndex = this.currentIndex === this.images.length - 1 ? 0 : this.currentIndex + 1;
    this.renderCurrentSlide();
  }
   prev(){
    this.currentIndex = this.currentIndex - 1 < 0 ? this.images.length - 1: this.currentIndex - 1;
    this.renderCurrentSlide();
  }
}

If you have any doubts about mastering Angular structural directives, please contact us through the given email. Airo Global Software will be your digital partner.

E-mail id: [email protected]

Author - Johnson Augustine
Chief Technical Director and Programmer
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Every software developer with a little experience understands the value of keeping things simple and stupid (KISS). Once you've learned how to use classes and functions, you don't want to repeat yourself, so you keep things DRY. The goal of all of these principles is to reduce mental complexity in order to make software easier to maintain.

  • Don’t repeat yourself (DRY)

The DRY principle is at the heart of software development. We organize our code into packages and modules. We extract functions. We try to make the code reusable so that we can hopefully maintain it more easily.

Benefits: Reduced complexity. The more code you have, the more maintenance you'll have to do. DRY usually results in less code. This means that for typical changes, you only need to make one adjustment.

Risk: When you do it too often, the code tends to become more complex.

Tooling support: There are programs that can detect duplicated code. Here is one for Python:

pylint --disable=all --enable=similarities src

  • You Ain't Gonna Need It (YAGNI)

The realization that too much abstraction actually harms maintainability is referred to as YAGNI. I'm looking at you, Java developers!

Benefit: Reduced complexity. The removal of abstractions clarifies how the code works.

Risk: You will have difficulty extending your software if you use YAGNI too much and thus make too few abstractions. Furthermore, junior developers may tamper with the code in an unfavourable way.

Tooling support: None

  • Keep it Simple and Stupid (KISS)

KISS can be applied to a variety of situations. Although some solutions are smart and solve the problem at hand, the dumber solution may be preferable because it has less of a chance of introducing problems. This may occasionally be less DRY.

  • Principle of Least Surprise

Design your systems so that the location of feature implementation, as well as the behaviour and side-effects of a component, are as unsurprising as possible. Keep your coworkers informed.

Benefit: Reduced complexity. You ensure that the system's mental model corresponds to what people naturally assume.

Risk: You may need to break DRY in order to complete this task.

Tooling support: None. However, there are some indications that this was not followed:

  • You're explaining the same quirks of your system to new colleagues over and over.
  • You have to look up the same topic several times.
  • You feel compelled to document a topic that is not inherently difficult.

  • Separation of Concerns (SoC)

Every package, module, class, or function should be concerned with only one issue. When you try to do too many things, you end up doing none of them well. In practice, it is most visible in the separation of a data storage layer, a presentation layer, and a layer containing the business logic. Other types of concerns could include input validation, data synchronization, authentication, and so on.

Benefit: Reduced complexity: It's usually easier to see where changes need to be made.

There should be fewer unfavourable side effects to consider.

People can work in parallel without encountering a slew of merge conflicts.

Risk: If you go overboard on SoC, you will almost certainly violate KISS or YAGNI.

Tooling support: Cohesion can be measured by counting how many classes/functions from other packages are used. A large number of externally imported functions may indicate SoC violations. A large number of merge conflicts may also indicate a problem.

  • Fail early, fail loud

As developers, we must deal with a wide range of errors. And it's unclear how to deal with them, especially for beginners.

To fail early is a pattern that has helped me a lot in the past. That is, the error should be recognized very close to the location where it can occur. User input, in particular, should be validated directly in the input layer. However, network interactions are another common scenario in which error cases must be handled.

The other pattern is to fail loudly, which means throwing an exception and logging a message rather than simply returning None or NULL. Depending on the type of exception, you may also want to notify the user.
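As a small sketch in TypeScript (the IBAN check is simplified and the error type is made up for illustration):

class InvalidIbanError extends Error {}

function parseIban(input: string): string {
  const iban = input.replace(/\s+/g, '').toUpperCase();
  // fail early: validate at the boundary instead of letting bad data propagate
  // fail loud: throw a specific exception rather than returning null
  if (!/^[A-Z]{2}[0-9]{2}[A-Z0-9]{1,30}$/.test(iban)) {
    throw new InvalidIbanError(`Not a valid IBAN: ${input}`);
  }
  return iban;
}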

Benefit: Easier to maintain because it is clear where functionality belongs and how the system should be designed. Errors occur earlier, making debugging easier.

Risk: None.

Tooling support: None

  • Defensive Programming

The term "defensive programming" is derived from the term "defensive driving." Defensive driving is defined as "driving to save lives, time, and money regardless of the circumstances around you or the actions of others." Defensive programming is the concept of remaining robust and correct in the face of changing environmental conditions and the actions of others. This can mean being resistant to incorrect input, such as when using an IBAN field in a database to ensure that the content stored there contains an IBAN. It may also imply making assertions explicit and raising exceptions if those assertions are violated. It may imply making API calls idempotent. It may imply having a high level of test coverage in order to be defensive against future breaking changes.

Three fundamental rules of defensive programming

  • Until proven otherwise, all data is relevant.
  • Unless proven otherwise, all data is tainted.
  • Until proven otherwise, all code is insecure.

"Shit in shit out" is an alternative to defensive programming.

Benefit: Higher robustness

Risk: Increased maintenance as a result of a more complex/lengthy code base

Tooling support: Check your test coverage to see how much of your code is covered by unit tests. Try mutation testing if you want to go crazy. For infrastructure, there is chaos engineering. And there is load testing.

SOLID

The SOLID principles provide guidance in the areas of coupling and cohesion. They were designed with object-oriented programming (OOP) in mind, but they can also be applied to abstraction levels other than classes, such as services or functions. Later on, I'll simply refer to those as "components."

Two components can be coupled in a variety of ways. For example, one service may require knowledge of how another service operates internally in order to perform its functions. The more component A depends on component B, the more A is coupled to B. Please keep in mind that this is an asymmetric relationship: A depending on B does not imply that B depends on A.

One module's high cohesion indicates that its internal components are tightly linked. They are all about the same thing.

We strive for loose coupling and high cohesion between components.

The principle of single-responsibility

"A class should never change for more than one reason."

The roles of software entities such as services, packages, modules, classes, and functions should be clearly defined. They should typically operate at a single abstraction level and not do too much.

One tool for achieving separation of concerns is single responsibility.

Tooling support:

I'm not aware of any automated tools for detecting violations of the single-responsibility principle. You can, however, try to describe the functionality of a component without using the words "and" or "or." If this does not work, you may be violating the principle.

The open-closed principle

"Software entities... should be open to extension but not to modification."

If you modify a component on which others rely, you risk breaking their code.

The Liskov substitution principle

"Functions that use pointers or references to base classes must be able to use derived class objects without being aware of it."

Benefit: This is a fundamental assumption in OOP. Simply follow it.

Risk: None

The principle of interface segregation

"A number of client-specific interfaces are preferable to a single general-purpose interface."

Benefit: It's easier to extend software and reuse interfaces if you have the option to pick and choose. However, if the software is entirely in-house, I would rather create larger interfaces and split as needed.

Risk: Violation of KISS.

The principle of dependency inversion

"Rely on abstractions rather than concretions."

In some cases, you may want to operate on a broader class of inputs than the one you're currently dealing with. WSGI, JDBC, and basically any plugin system come to mind as examples. You want to define an interface on which you will rely. The components must then implement this interface.

Assume you have a program that requires access to a relational database. You could implement queries separately for every type of relational database. Alternatively, you can specify that the function receives a database connector that supports the JDBC interface.
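A sketch in TypeScript (the interface and classes are hypothetical):

// The business logic depends on this abstraction...
interface DatabaseConnector {
  query(sql: string): Promise<unknown[]>;
}

// ...and concrete connectors implement it.
class PostgresConnector implements DatabaseConnector {
  async query(sql: string): Promise<unknown[]> {
    // the real driver calls would go here
    return [];
  }
}

// The caller relies on the abstraction, not on a concrete database.
async function loadUsers(db: DatabaseConnector) {
  return db.query('SELECT * FROM users');
}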

Benefit: In the long run, this makes the software much easier to maintain because it is clear where functionality is located. It also aids in KISS.

Risk: Overdoing it may result in a violation of KISS. A good rule of thumb is that an interface should be implemented by at least two classes before it is created.

  • Locality Principle

Things that belong together should not be separated. If two sections of code are frequently edited together, try to keep them as close together as possible. At the very least, they should be in the same package, hopefully in the same directory, and possibly in the same file — and if you're lucky, they should be in the same class or directly below each other within the file.

Benefit: Hopefully, this means fewer merge conflicts. When you try to find that other file, you do not need to switch context. When you refactor that piece, you may recall everything that belongs to it.

Risk: Violating loose coupling or concern separation.

Tooling support: So far, none, but I'm considering making one for Python. Essentially, I would examine the git commits.

If you have any doubt about the above topic. Don’t hesitate to contact us. Airo Global Software will be your digital partner.

E-mail id: [email protected]

Author - Johnson Augustine
Chief Technical Director and Programmer
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Typescript ended 2020 with a fantastic 4.1 release. There, the long-awaited Template Literal Types and Key Remapping features were introduced. These opened the door to a lot of new possibilities and patterns.

However, the 4.1 release only laid the groundwork for Template Literal Types. The feature has matured over the course of the 2021 releases. For example, since the 4.5 release we can use Template Literal Types as union discriminants.

There have been four Typescript releases in 2021, jam-packed with fantastic features. The core and developer experiences have been vastly improved. In this blog, I will summarise my top 2021 picks: the ones that have the most influence on my daily Typescript work.

  • Leading and Middle Rest Elements in Tuples

Typescript has long supported the Tuple basic type. It enables us to express a predetermined number of array-type elements.

let arrayOptions: [string, boolean, boolean];

arrayOptions = ['config', true, true]; // works

arrayOptions = [true, 'config', true];
//             ^^^^^  ^^^^^^^^^
// Does not work: incompatible types

function printConfig(data: string) {
  console.log(data);
}

printConfig(arrayOptions[0]);

As part of the tuple, we can define optional elements:

// last 2 elements are optional
let arrayOptions: [string, boolean?, boolean?];
//  A required element cannot follow an optional element
let arrayOptions: [string, boolean?, boolean];

We are implying that our array can have multiple lengths by using the optional modifier. We can even go one step further and define a dynamic length array of the following type:

// any number of optional boolean elements may follow
let arrayOptions: [string, ...boolean[]];
// the below will be all valid
arrayOptions = ['foo', true];
arrayOptions = ['foo', true, true];
arrayOptions = ['foo', true, true, false];

Tuples become more powerful in TypeScript 4.2. We could previously use a rest element, but we couldn't specify the types of the elements that follow it.

However, prior to the 4.2 release, the following would be incorrect:

//  Prior to 4.2: A rest element must be last in a tuple type
let arrayOptions: [string, ...boolean[], number];

Prior to 4.2, the rest element had to be the tuple's final element. That is no longer required in the 4.2 release: we can add as many trailing elements as we want after it, without being limited by that constraint. We cannot, however, add an optional element after a rest element.

//  Prior to 4.2, Error: rest element must be last in a tuple type
let arrayOptions: [string, ...boolean[], number];
// works from 4.2
let arrayOptions: [string, ...boolean[], number];
//  Error: An optional element cannot follow a rest element
let arrayOptions: [string, ...boolean[], number?];

Let’s see more details:

let arrayOptions: [string, ...boolean[], number];
arrayOptions = ['config', 12]; // works: zero booleans between the string and the number

  • Errors on Always-Truthy Promise Checks

Starting with the 4.3 release, TypeScript will throw an error when asserting against a promise, as part of the strictNullChecks configuration.

function fooMethod(promise: Promise<boolean>) {
   if (promise) {
   // ^^^^^^^^^
   // Error: This condition will always return true since this 'Promise<boolean>'
   // appears to always be defined.
   // Did you forget to use 'await'?
       return 'foo';
   }
   return 'bar';
}

Because the if condition will always be true, the compiler requests that we modify the if statement.

In the config compiler options, there is no additional flag to configure this.

  • Const variables preserve Type Guard references

When asserting an if statement, Typescript will now perform some additional work. If the variable is const or read-only, it will keep its Type Guard, if it has one. Go through the code below:

function trim(text: string | null | undefined) {
 const isString = typeof text === "string";
 // prior to 4.4, this const doesn't work as a Type Guard
 if (isString) {
   return text.trim();
   //     ^^^^
   //  Prior to 4.4: Object is possibly 'null' or 'undefined'
   //  Works on 4.4 and onwards
 }
 return text;
}

If the isString variable were declared with let, the preceding code would fail. The Type Guard would be ignored, and the code would fail as shown below:

// isString is instead declared as let
let isString = typeof text === "string";
//  this statement won't work as a Type Guard
if (isString) {
...
}

  • Combining several variables

Type Guard aliases are now smarter and can understand multiple variable combinations.

function concatUppercase(a: string | undefined, b: string | undefined) {
 const bothNonEmpty = a && b;
 //  the Type Guard will work for a and b
 if (bothNonEmpty) {
   //  a and b are of type string
   return `${a.toUpperCase()} ${b.toUpperCase()}`
 }
 return undefined;
}

Both Type Guards are stored in the bothNonEmpty const variable. Within the if statement, a and b are both of the string type. It also works transitively: when combining variables with Type Guards, those Guards will still be propagated. That means you'll be able to combine as many Type Guards as you want without losing any typing information.

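To illustrate the transitive case, here is a minimal sketch (the helper names are my own):

function concatTrimmed(a: string | null, b: string | null) {
 const aIsString = typeof a === "string";
 const bIsString = typeof b === "string";
 // combining two guarded consts still propagates both guards
 const bothStrings = aIsString && bIsString;
 if (bothStrings) {
   // a and b are narrowed to string here
   return `${a.trim()} ${b.trim()}`;
 }
 return undefined;
}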

In the preceding examples, we can see that the combined guard constants and the underlying type guard information are retained.

In conclusion, the Control Flow Analysis has been greatly enhanced. The best part is that it works out of the box in TypeScript 4.4.

  • Exact Optional Property Types

When working with Typescript, there is a recurring debate: should a property be made optional or undefined? It all comes down to personal preference.

So, what's the issue? The Typescript compiler treats both equally. As a result, there is some inconsistency in the code and some friction.

Let’s see an example:

interface User {
 nickName: string;
 email?: string;
}
// is considered equivalent to
interface User {
 nickName: string;
 email: string | undefined;
}

To put a stop to this inconsistency, TypeScript now has a flag called --exactOptionalPropertyTypes. When enabled, it will generate an error if you attempt to treat an optional value as nullable and vice versa. Consider the following code with --exactOptionalPropertyTypes set to true:

interface User {
 nickName: string;
 email?: string;
}
//  Error: Type 'undefined' is not assignable to type 'string'
const user1: User = {
 nickName: 'dioxmio',
 email: undefined
}
//  Works fine, email is optional
const user2: User = {
 nickName: 'max',
}

The code above would be fine if the --exactOptionalPropertyTypes option were not enabled.

To avoid any unintended consequences, it is disabled by default. It is up to us to decide whether or not it is a feature worth having.
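
One practical consequence, sketched below under the assumption that both strictNullChecks and --exactOptionalPropertyTypes are enabled: to clear an optional property, you delete it instead of assigning undefined.

const user3: User = { nickName: 'ada', email: 'ada@example.com' };
// user3.email = undefined; //  Error under --exactOptionalPropertyTypes
delete user3.email;         //  works: email is declared optional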

  • Symbol Index Signatures and Template Literal Strings

In index signatures, Typescript 4.4 now supports symbol, union, and template literal strings. Unions are allowed as long as they are made up of a string, number, or symbol.

An example using the symbol:

interface Log {
 // symbols are now supported as a key type
 [x: symbol]: string;
}
const warn = Symbol('warn');
const error = Symbol('error');
const debug = Symbol('debug');
const log: Log = {};
log[warn] = 'A warning has occurred in line X';

Code using a template literal string:
interface Transaction {
 // template literal string
 [x: `amex-${string}`]: string;
}
const log: Transaction = {};
log['amex-123456'] = '$120';

We can reduce a lot of boilerplate by being able to use unions. We can express our interfaces and types more clearly.

// unions, template literal string and symbols are now supported
interface Foo {
 [x: string | number | symbol | `${string}-id`]: string;
}
// the code above is Equivalent to
interface Foo {
 [x: string]: string;
 [x: number]: string;
 [x: symbol]: string;
 [x: `${string}-id`]: string;
}
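
A quick usage sketch for the Foo interface above (the keys are arbitrary examples of mine):

const registry: Foo = {};
registry['plain'] = 'a string key';
registry[42] = 'a number key';
registry[Symbol('s')] = 'a symbol key';
registry['user-id'] = 'a key matching the template literal pattern';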

The index signatures aren't perfect yet; they still have constraints. They continue to lack support for generic type parameters and for template literal types built from finite unions:

type dice = 1 | 2 | 3 | 4 | 5 | 6;
interface RecordItem<K extends string, V extends string> {
 //  generic type parameters are not supported
 [x: K]: V;
 //  template literal types built from finite unions are not supported
 [x: `${dice}x${dice}`]: string;
}

Nonetheless, this is a fantastic feature addition that will allow us to create more powerful interfaces with fewer lines of code.

  • The Awaited Type

Prior to 4.5, we had to use the infer functionality, as shown below, to determine the resolved type of a Promise:

type Unwrap<T> = T extends PromiseLike<infer U> ? U : T;
const resultPromise = Promise.resolve(true);
//  resultUnwrapType is boolean
type resultUnwrapType = Unwrap<typeof resultPromise>;

A new type, Awaited, is included in the 4.5 release. With it, we don't need a custom conditional type like the Unwrap type described above.

Syntax:

type Result = Awaited<Type>;

Use case examples:

//  type is string
type basic = Awaited<Promise<string>>;
//  type is string
type recursive = Awaited<Promise<Promise<string>>>;
//  type is boolean
type nonThenObj = Awaited<boolean>;
//  type is string | Date
type unions = Awaited<Promise<string> | Promise<Date>>;
type FakePromise = { then: () => string };
//  type is never
type fake = Awaited<FakePromise>;
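
A more practical sketch combining Awaited with ReturnType (fetchUser is a hypothetical helper of mine, not part of the release notes):

async function fetchUser() {
 return { name: 'Ada', admin: true };
}
// UserData is { name: string; admin: boolean }
type UserData = Awaited<ReturnType<typeof fetchUser>>;
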
  • Type-Only Import Specifiers

This instructs the TypeScript compiler that an import only contains TypeScript types. What difference does it make? When converting the code to JavaScript, the compiler can safely strip that import.

//  importing the FC type from React
import type { FC } from 'react';
import { useEffect } from 'react';

As you can see above, the issue is that if you want to be explicit about your type-only imports, you must sometimes split a single import into two statements. You can continue to do the following:

import { FC, useEffect } from 'react';

However, you are sacrificing some readability. You can mix them together starting with version 4.5.

import { type FC, useEffect } from 'react';

This clarifies the code without adding any extra boilerplate.

Conclusion

TypeScript has grown in popularity over the years. We can anticipate TypeScript becoming the default language for JavaScript-based projects in the near future. In 2021, TypeScript improved greatly. Its core has become smarter, allowing us to rely more on inference. As a result, it is less intrusive and the transition from JavaScript code is easier.

The year 2022 is also looking exciting, and there are some cool features on the horizon. If you have any doubt about the best TypeScript features, don't hesitate to contact us. Airo Global Software will be your digital partner.

E-mail id: [email protected]


Author - Johnson Augustine
Chief Technical Director and Programmer
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Before we begin, a basic understanding of JavaScript and object-oriented programming is required. I'll try to be as thorough as possible, but I won't go through the basics of JavaScript again.

  • User data is retrieved via an API, then formatted and displayed.

To get started, clone this GitHub repository. It includes a simple React app that shows a list of ten employees. Each employee has a first and last name, as well as an email address, a photo, and a registration date. The employee data is fetched from a JSON file when the React application is launched: the file users.json, located in the public folder.

This React application's framework is quite standard. The following are the three primary folders:

  • Both conventional components, such as the Date and Email components, and layout components, such as the Header and the Main wrapper, are found in the components folder.
  • The Home component is located in the pages folder. We'll have a lot of page components in a bigger project.
  • Both the JSON mock file and the randomuser API are accessed through the services folder.

The pages/Home component, as well as the React Query setup, are used in the App.js file.

The data synchronization aspect of the project is done with React Query. This library, which takes inspiration from react-apollo, enables the declarative creation of HTTP requests. Although I do not believe the documentation is always clear, the library is really useful and readable.

You must wrap your entire React application in the QueryClientProvider component to make it work:

// imports and client setup added for completeness; paths assumed from the
// project structure described above
import { QueryClient, QueryClientProvider } from 'react-query'
import HomePage from './pages/Home'

const queryClient = new QueryClient()

function App() {
    return (
        <QueryClientProvider client={queryClient}>
            <HomePage />
        </QueryClientProvider>
    )
}

Then, as in the case of the pages/Home component, you can utilise the useQuery hooks:

const Page = () => {
    const { isLoading, error, data } = useQuery(
        'users',
        () => get(),
        {
            refetchOnWindowFocus: false
        }
    )

    // simple placeholders for the loading and error states
    if (isLoading) return <div>Loading...</div>
    if (error) return <div>An error occurs...</div>

    return (
        <Body>
            <Main>
                <Header>
                    <PageTitle text='Students' />
                </Header>
                {data.map(user => <UserCard key={user.id} user={user} />)}
            </Main>
        </Body>
    )
}

The data is presented in the UserCard component once the promise is resolved. This component is supported by a number of additional components. For example, the UserImage component shows the user's photo, the UserName component shows the user's name, and the UserEmail component shows the user's email address. Prop drilling is used to transfer data from parent to child components.

const Component = ({ user }) => (
    <Wrapper>
        <UserImage
            firstName={user.first_name}
            lastName={user.last_name}
            picture={user.picture}
        />
        <UserName
            firstName={user.first_name}
            lastName={user.last_name}
        />
        <UserJoinedDate date={user.registered_date} />
        <UserEmail email={user.email} />
    </Wrapper>
)

For the time being, our programme relies on the JSON file in the public folder, which works perfectly. But now it's time to make things a little more complicated: instead of using the JSON file, we'll use the Random User Generator API. Refresh the application after changing getMockData to getApiData in the get method of the services/Api/index.js file.

async function getMockData() {
    return await fetch('http://localhost:3000/users.json')
        .then(res => res.json())
        .then(({ data }) => data)
}

async function getApiData() {
    return await fetch('https://randomuser.me/api/?results=10')
        .then(res => res.json())
        .then(({ results }) => results)
}

async function get() {
    return await getMockData() // change this to getApiData
}

export default get

The application is now broken, and you'll need to modify all of the props to make it function again. That could be acceptable in a one-page application, but picture having to do it in a ten- or twenty-page application: it won't be pleasant at all. You'll learn nothing and possibly introduce some bugs. The constructor pattern comes in very handy in this situation.

  • Using the Constructor Pattern, create a model for our user data.

The constructor pattern is frequently the first design pattern I teach new developers. It's a wonderful introduction to design patterns because it's directly applicable and doesn't rely on abstraction. It's simple to grasp, can be used on the front end, and is also simple to put to work.

When I started learning Java and later PHP, I first heard about it. You may have already encountered this notion if you are familiar with those languages. Do you know what the names POJO and POPO mean? POJO and POPO stand for Plain Old Java Object and Plain Old PHP Object, respectively. We also refer to them as Entities. Most of the time, we use them to encapsulate and store data.

Here's an example of how a POPO, together with an interface, can be used to define an object's blueprint:

interface UserInterface {
  public function getName();
  public function setName($name);
}
class User implements UserInterface {
  public $firstName;
  public $lastName;
  public function getName()
  {
      // ...
  }
  public function setName($value) {
      // ...
  }
}

Things aren't always the same in JavaScript as they are in other programming languages. This is because JavaScript is a prototypal object-oriented language rather than a class-based one, and it is also not a completely object-oriented language. Enumerations and interfaces, for example, do not exist in JavaScript. We can imitate enumerations with Object.freeze, but this is not the same as using the enum keyword.

Returning to the constructor pattern, there are two ways to implement it: using a function or a class/prototype. Because the class keyword constructs a prototype behind the scenes, the terms class and prototype are interchangeable here.

Here's an example with a class:

class User {
  constructor(firstName, lastName, age) {
    this._firstName = firstName
    this._lastName = lastName
    this._age = age
  }
  get firstName() {
    return this._firstName
  }
  get lastName() {
    return this._lastName
  }
  get age() {
    return this._age
  }
  displayUserInfo() {
    console.log(`Here is the information I have on this user: ${this._firstName}, ${this._lastName}, ${this._age}`)
  }
}
const MyFirstUser = new User('Thomas', 'Dimnet', 33)

const MySecondUser = new User('Alexandra', 'Corbelli', 30)

MyFirstUser.displayUserInfo()

MySecondUser.displayUserInfo() 

And below it with a function:

function User(firstName, lastName, age) {
  this._firstName = firstName
  this._lastName = lastName
  this._age = age
  this.firstName = function() {
    return this._firstName
  }
  this.lastName = function() {
    return this._lastName
  }
  this.age = function() {
    return this._age
  }
  this.displayUserInfo = function() {
    console.log(`Here is the information I have on this user: ${this._firstName}, ${this._lastName}, ${this._age}`)
  }
}

const MyFirstUser = new User('Thomas', 'Dimnet', 33)

const MySecondUser = new User('Alexandra', 'Corbelli', 30)

MyFirstUser.displayUserInfo()

MySecondUser.displayUserInfo()

I like to work with the class keyword most of the time since I believe it is more legible and clear. At a glance, we can tell what the class's getters and setters are. Feel free to use either form, but I'll be using the class version for the rest of the blog.

One of my favourite aspects of the constructor pattern is its ability to store both raw and parsed data. Assume you receive a timestamp date from an API and need to display it in two formats: "YYYY-MM-DD" and "DD-MM-YYYY". Here's an example of how you can use the constructor pattern.

import moment from "moment"

class Movie {
    constructor(date) {
        this._date = date
    }

    get date() {
        return this._date
    }

    get dateV1() {
        return moment(this._date).format("YYYY-MM-DD")
    }

    get dateV2() {
        return moment(this._date).format("DD-MM-YYYY")
    }
}
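
A quick usage sketch (the timestamp is an arbitrary example of mine; the exact output depends on your timezone):

const movie = new Movie(1633024800000)
movie.dateV1 // e.g. "2021-09-30"
movie.dateV2 // e.g. "30-09-2021"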

By the way, the constructor pattern isn't just for formatting objects: you can use it for any type of object creation. Many jQuery effects, for example, employ the constructor pattern.

Before we begin implementing the solution, keep in mind the main disadvantage of this pattern: it can be memory-intensive. Although our computers and phones now have a lot of memory, it's always important to remember that our software and applications need to be optimized. I recommend creating such objects only when they are required.

Change from mocked data to API data without a hitch.

You can now switch from the current branch to the with-constructor-pattern branch. There are two constructor pattern models in this branch: src/models/MockedUser.js and src/models/ApiUser.js. The first wraps the hard-coded JSON data, while the second wraps data from the Random User Generator API. The data displayed is currently coming from the JSON file.

The MockedUser object looks like this:

import moment from "moment"

class User {
    constructor(data) {
        this._id = data.id
        this._firstName = data.first_name
        this._lastName = data.last_name
        this._email = data.email
        this._picture = data.picture
        this._registeredDate = data.registered_date
    }
    get id() {
        return this._id
    }
    get fullName() {
        return `${this._firstName} ${this._lastName}`
    }
    get email() {
        return this._email
    }
    get picture() {
        return this._picture
    }
    get registeredDate() {
        return moment(this._registeredDate).format('MM/DD/YY')
    }
}

export default User
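
Here is a minimal sketch of how the model wraps raw JSON (the field values are made up):

const raw = {
    id: 1,
    first_name: 'Thomas',
    last_name: 'Dimnet',
    email: 'thomas@example.com',
    picture: 'https://example.com/photo.jpg',
    registered_date: '2021-03-15'
}
const user = new User(raw)
user.fullName       // "Thomas Dimnet"
user.registeredDate // "03/15/21"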

This is a simple JavaScript class that contains all of the user's necessary properties: an id, a name, an email, a photo, and a registration date. Rather than using raw JSON data, we now use this model throughout our code. It provides us with a single source of truth. If we want to add a new property, such as a last login date, or if the data shape changes, we can do so here.

Despite the addition of these two new objects, the only change to the code is in src/pages/Home/index.js:

{
 data
   .map(user => new MockedUser(user)) // this is where we do the change
   .map(user => <UserCard key={user.id} user={user} />)
}

We now use the data stored in the MockedUser object instead of the raw JSON data. Assume we want to use the data from the RandomUser API. We only need to make two changes. To begin, modify the get function in src/services/Api/index.js. We'll now use actual data.

async function get() {
   return await getApiData() // Instead of getMockData
}

Then, in src/pages/Home/index.js, replace the MockedUser model with the ApiUser model.
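
Here is a minimal sketch of that second change, mirroring the mapping shown earlier:

{
 data
   .map(user => new ApiUser(user)) // swap in the API model
   .map(user => <UserCard key={user.id} user={user} />)
}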

With only two changes, you can now use API data instead of JSON data and keep the project running! Furthermore, you understand which properties the user data requires and can easily add new ones or modify existing ones! I hope you enjoyed this blog about JavaScript design patterns. If you're reading it for the first time, I'm delighted to have been your guide. Please feel free to ask any questions you may have. Airo Global Software will be your digital partner.

E-mail id: [email protected]


Author - Johnson Augustine
Chief Technical Director and Programmer
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Apple is well-known for its ease of use. With the introduction of the new Audio Graphs feature in iOS 15, iOS became even more accessible for visually impaired users.

This blog will teach you everything you need to know to start using Audio Graphs in your iOS app. We'll go over what Audio Graphs are and how we can incorporate them into our apps, as well as how to define axes and data points.

By the end of this blog, you'll be prepared to make your app more accessible than ever before and assist more people in using it.

For background, see Apple's session "Bring accessibility to charts in your app."

What are Audio Graphs?

We will not look at how to create and use graphs in this blog. There are numerous effective methods for presenting charts and graphs to users, ranging from simple self-built solutions to ready-to-use frameworks such as Charts. Instead, we'll work with a sample set of data points and concentrate on adding the Audio Graphs feature.

The displayed page also includes additional information about the data series, such as a summary, features, and statistics. All of this information will be read aloud to the user by VoiceOver.

Implementing Audio Graphs

The first step is to make the view controller displaying the chart conform to the AXChart protocol. There is only one requirement: the property accessibilityChartDescriptor of type AXChartDescriptor. A descriptor of this type contains all of the information required to display the Audio Graph page, such as the title, summary, and series. A chart descriptor is made up of additional descriptors for the axes and data points. Let's take a closer look at these classes before combining them to make an AXChartDescriptor.

Describing the Axes

AXNumericDataAxisDescriptor and AXCategoricalDataAxisDescriptor are the two types of axis descriptors. Both implement the same AXDataAxisDescriptor basis protocol, which cannot be used directly. Both types of descriptors can be used on the x-axis. However, only a numeric descriptor can be used to define the y axis. This makes sense because the graph's points can only be numbers, whereas the x values can be both points and categories. Let's begin by making an x-axis, which can be done as follows:

private var xAxis: AXNumericDataAxisDescriptor {
    // 1
    AXNumericDataAxisDescriptor(
        title: "The x axis",
        // 2
        range: (0...9),
        // 3
        gridlinePositions: [],
        // 4
        valueDescriptionProvider: { (value: Double) -> String in
            "\(value)"
        }
    )
}
  • For the time being, we'll create a numerical axis with an example title. For a real-world app, you should use a more descriptive title, as this is what VoiceOver will read.
  • An axis must also be aware of its range. We'll create 10 data points in a later step, so the range in this example is (0...9). When creating your points based on real-world data, you can specify the number of values to display in the graph.
  • Next, we can pass in an array of points to display grid lines. However, regardless of the values entered, this appears to have no effect on the created Audio Graph detail page.

Please let me know if you have any additional information about this property in the comments!

  • Finally, an axis must understand how to convert data points into strings that can be read to the user. This is accomplished by providing a closure that converts a Double value to a String. We just embed the value in a string in this case, but the closure could also apply formatting or other transformations.

The y axis is created in the same manner as the x-axis. It also requires a title, range, gridlinePositions, and valueDescriptionProvider:

private var yAxis: AXNumericDataAxisDescriptor {
    AXNumericDataAxisDescriptor(
        title: "The y axis",
        range: (0...9),
        gridlinePositions: [],
        valueDescriptionProvider: { (value: Double) -> String in
            "\(value)"
        }
    )
}

Describing the Data Points

The graph points are encapsulated in an AXDataSeriesDescriptor, which represents a single data series. An Audio Graph can have multiple data series, but for the time being, we'll only use one. An AXDataSeriesDescriptor is made up of a name, a boolean flag indicating whether or not the data series is continuous, and an array of AXDataPoint objects representing the actual points. A point has an x-axis value called xValue at all times. The y axis value, yValue, is optional. A point can also have a label to give the data point a name, as well as additionalValues, which can be numerical or categorical values for this data point. Given some example values, here's how to make an AXDataSeriesDesciptor:

private var series: [AXDataSeriesDescriptor] {
    // 1
    let yValuesSeries = [4.0, 5.0, 6.0, 3.0, 2.0, 1.0, 1.0, 3.0, 6.0, 9.0]
    let dataPointsSeries = yValuesSeries.enumerated().map { index, yValue in
        AXDataPoint(x: Double(index), y: yValue)
    }
    // 2
    return [
        AXDataSeriesDescriptor(
            name: "Data Series 1",
            isContinuous: true,
            dataPoints: dataPointsSeries
        )
    ]
}
  • For the y axis, we create data points with an array of values. The value of the x-axis corresponds to the index of a number in the array.

  • We then wrap the array of AXDataPoint objects we just created in an AXDataSeriesDescriptor. We use true for isContinuous to display a single coherent graph, and "Data Series 1" as the name for this data series.

  • To display all points separately, use false for isContinuous. Check it out for yourself or wait for the next section, where we'll go over more options in depth.

Putting all Descriptors together

We've made two descriptors for the axes and one for a data series. We are now ready to combine them to form one AXChartDescriptor. Here's how we can go about it:

// 1
var accessibilityChartDescriptor: AXChartDescriptor? {
    // 2
    get {
        AXChartDescriptor(
            title: "Example Graph",
            summary: "This graph shows example data.",
            xAxis: xAxis,
            yAxis: yAxis,
            series: series
        )
    }
    // 3
    set { }
}
  • As previously stated, in order to implement the AXChart protocol, we must provide the property accessibilityChartDescriptor, an optional AXChartDescriptor.
  • To do so, we specify a title and a summary that will be displayed and read to the user on the Audio Graphs detail page. We also pass in the axis and data series descriptors that we created earlier.
  • We leave the setter empty because this property will never be set from anywhere else.

Using Audio Graphs

Let's take a look at our audio graph in action. It can be intimidating to use VoiceOver if you are not used to it. A double or triple tap on the iPhone's back is the best way to enable or disable it. Scroll down to Back Tap in Settings > Accessibility > Touch. A variety of actions can be defined here to be triggered by a double or triple back tap.

Next, launch your app and navigate to the graph for which you have enabled the Audio Graph feature. Enable VoiceOver and swipe until the graph is selected.

If you're not sure how to use VoiceOver, you can consult Apple's VoiceOver gesture guide or raywenderlich.com's iOS Accessibility: Getting Started.

Open the Audio Graph detail page — this is the result of our efforts!

Swipe right until the Play button appears, then double-tap to listen to the Audio Graph. It's easy to see (or hear) why this new feature improves graph accessibility for visually impaired users so much.

Where To Go From Here

As demonstrated in this tutorial, Apple made it very simple to add audio representations to existing graphs. All you have to do is wrap your data points in an AXDataSeriesDescriptor and add some metadata.

In the following section, we'll look at how adaptable they are. We'll go over various types of axes and show more than one data series. This section will be published next week, so stay tuned for more information!

Audio Graphs can help you make your apps more accessible to a wider audience. This will provide your users with a better experience.

If you have questions or remarks, please let us know in the given email below. Airo Global Software will be your digital partner.

E-mail id: [email protected]


Author - Johnson Augustine
Chief Technical Director and Programmer
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

Equatable, Comparable, Identifiable, and Hashable solutions

Protocols are not new to iOS or its cousin OS X; in fact, delegate protocols are the bread and butter of more than half of the frameworks, though this may change in the coming years with the introduction of async/await. Having said that, since SwiftUI's release in 2019, protocols appear to be taking on a new role.

That is because SwiftUI includes a number of mandatory protocols that are linked to the language itself. Although it is not always clear what is going on, basic protocols such as Equatable, Comparable, Identifiable, and Hashable are used.

Identifiable

This is the first protocol you'll likely encounter as a new SwiftUI coder when attempting to define a ForEach loop, for example, within a List, assuming we have an array, dice, containing elements of a custom struct.

struct ContentView: View {
 @State var dice = [Dice]()
 var body: some View {
   ForEach(dice) {
     Text(String($0.value))
   }
 }
}

The compiler is looking for a way to uniquely identify each row within the loop. The Dice struct shown here must conform to Identifiable. Conformance is obtained with code such as this.

struct Dice: Identifiable {
 //  var id = UUID()
 var id = Date().timeIntervalSince1970 // epoch [dies Jan 19, 2038]
 var value: Int!
}

Hashable

The second protocol you're likely to encounter is Hashable, which SwiftUI requires for loops like the one shown here.

ForEach(dice, id: \.self) { die in
 Text("Die: \(die.value)")
}

But be careful, because adding a third protocol, Equatable, with the definition shown below will cause your code to crash.

struct Dice: Equatable, Hashable {
 var id = UUID()
 var value: Int!
 static func ==(lhs: Dice, rhs: Dice) -> Bool {
   lhs.value == rhs.value
 }
}

The Hashable requirement here necessitates a unique identifier, similar to the Identifiable protocol.

To use both the Hashable and the Equatable protocols, you must instruct the Hashable protocol to focus on the id, which is, of course, that unique Identifiable value.

extension Dice: Hashable {
 static func ==(lhs: Dice, rhs: Dice) -> Bool {
   lhs.id == rhs.id
 }
 func hash(into hasher: inout Hasher) {
   hasher.combine(id)
 }
}

However, the hash function can also be useful in this case because, within a single run of the program, it guarantees the same output given the same input. Although this example may appear to be a little pointless, you can use code like this to generate the same key repeatedly.

.onAppear {
 var hash = Hasher()
 hash.combine(die.id)
 print("hash \(hash.finalize()) \(die.hashValue)")
}

The main page at Apple provides a more real-world example of how this protocol can be used.

Comparable

Comparable, which appears to be nearly identical to Equatable, is the next protocol on my shortlist.

extension Dice: Comparable {
 static func < (lhs: Dice, rhs: Dice) -> Bool {
   lhs.value < rhs.value
 }
}

This code was added to our SwiftUI interface to enable us to use the new protocol/property.

if dice.count == 2 {
 if dice.first! > dice.last! {
   Text("Winner 1st")
 } else {
   Text("Winner 2nd")
 }
}

However, there is a catch. I can't use the == in the same way because I had to point to the id to conform to the Hashable protocol.

if dice.first! == dice.last! {
 Text("Equal \(dice.hashValue)")
} else {
 Text("Unequal \(dice.hashValue)")
}

To get around this, I'll need to define a new operator. The fix necessitates creating a new infix operator, such as ====. Obviously, I'd need to change the code snippet above to use ==== instead of the == shown.

infix operator ==== : DefaultPrecedence
extension Dice {
 static func ====(lhs: Dice, rhs: Dice) -> Bool {
   lhs.value == rhs.value
 }
}

I'm sure Apple would prefer that you use protocols in your everyday code to make it clear what you're trying to accomplish, essentially an extension of types that you can use on your custom objects.

Equatable

Okay, I admit that the infix operator isn't for everyone, especially Swift purists. So here's an alternative that's more equitable and doesn't require an infix operator. Within it, I define the view itself as conforming to the Equatable protocol in order to target the value of my dice.

Please take note of what I used: .onAppear to initialize die1 and die2, and then .onChange to handle all subsequent reloads of the dice whenever I roll a new pair.

struct EqualView: View, Equatable {
 static func == (lhs: EqualView, rhs: EqualView) -> Bool {
   lhs.die1?.value == rhs.die2?.value
 }
 @State var die1: Dice? = nil
 @State var die2: Dice? = nil
 @Binding var dice: [Dice]
 var body: some View {
   Color.clear
     .frame(width: 0, height: 0, alignment: .center)
     .onAppear {
       die1 = dice.first!
       die2 = dice.last!
     }
     .onChange(of: dice) { values in
       die1 = dice.first!
       die2 = dice.last!
     }
   if die1?.value == die2?.value {
     Text("Equal ")
   } else {
     Text("Unequal ")
   }
 }
}

That covers the Swift protocols most commonly used with SwiftUI; I hope the topic is clear. If you have any doubt about the protocols used in SwiftUI, don't hesitate to contact us. Airo Global Software will be your digital partner.

E-mail id: [email protected]


Author - Johnson Augustine
Chief Technical Director and Programmer
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/

How to Use Git in Android Studio?


Integrate Git into the project

Check to see if Git is set up.

Navigate to Android Studio > Preferences > Version Control > Git. To ensure that Git is properly configured in Android Studio, click Test.

Allow integration of version control

Assume you've just started a new Android project called MyApplication. Go to VCS > Enable Version Control Integration in Android Studio. If it has previously been integrated with a version control system, this option will be hidden.

Then, as the version control system, select Git.

A default local master branch will be created if VCS is successfully enabled.

Add .gitignore to exclude files from Git

Two .gitignore files are automatically added when you create a new Android project in Android Studio (one in the project root folder, and one in the app folder). Files such as generated code, binary files (executables, APKs), and local configuration files should not be committed to Git; version control should be disabled for them. Here is the content of my first .gitignore file:

# content of .gitignore
*.iml
.gradle
/local.properties
/.idea/*
.DS_Store
/build
/captures
.externalNativeBuild   
.cxx

Stage and commit changes

The project is complete and ready for use with Git version control. Go to VCS > Commit to stage and commit your changes.

You will be presented with a dialogue in which you can examine all files that will be added, enter commit messages, and commit. You can uncheck any files that you do not want to be part of this commit.

When you click commit, a popup alerts you that you haven't yet configured your username or email address. Because they will be attached to your commit message, you should always configure them.

"Set properties globally" is an option. I recommend that you do not check this because doing so will result in every git project on your local machine having the same username/email. You may want to have separate usernames/emails for side projects and company projects.

All done: the entire project has now been committed to Git.

Configure Remote Connections

Go to VCS > Git > Remote to add the project to the remote repository.

To add a new remote, click "+", then enter your remote URL in the URL box. Your local project is now linked to your remote GitHub repository. Besides GitHub, you can use Bitbucket, GitLab, or any other repository host.

Push Changes to the Remote

Go to VCS > Git > Push to push your local changes to the remote repository. The "Push Commits" popup shows which commits will be pushed to the remote-tracking branch. You may proceed with the push.

Obtain the Changes from the Remote

To download the most recent remote changes, navigate to VSC > Git > Pull.

The popup "Pull Changes" appears. I won't go into detail about the pull strategy; simply use the default> strategy and perform the pull.

Collaborate with Branches

Some consider Git's branching model to be its defining feature, and it undoubtedly distinguishes Git in the VCS community. In this section, I'll show you how to use branches in Android Studio.

Make a new branch.

Navigate to VCS > Git > Branches.

The phrase "Git Branches" appears. It displays all of the local and remote branches, as well as the "New branch" option.

Click "New Branch" and give it the name "feature branch."

The other branching possibilities

Assume you are currently on the feature branch. When you expand the menu by clicking on the master branch, you will see several options:

Let me explain each of them in turn:

Checkout: check out the master branch.

Checkout As: check out master as a new branch with a new name.

Compare with Current: show commits that exist in master but not in feature, and vice versa.

Show Diff with Working Tree: show the diff between master and the current working tree.

Checkout and Rebase onto Current: check out master and rebase it onto the feature branch.

Rebase Current onto Selected: rebase the feature branch onto master.

Merge into Current: merge master into the feature branch.

Rename: rename the master branch.

Delete: delete the master branch.

You will select the best option based on your requirements.

Display Log History

Select VCS > Git > Show History from the menu.

The history of the currently open file will be displayed in Android Studio.

You can view the entire log history by clicking on the "Log" tab.

You can filter the history here by branch, user, and date, making it easier to find the commit you're looking for.

If you have any doubts about how to use Git in Android Studio, don't hesitate to contact us. Airo Global Software will be your digital partner.

E-mail id: [email protected]


Author - Johnson Augustine
Chief Technical Director and Programmer
Founder: Airo Global Software Inc
LinkedIn Profile: www.linkedin.com/in/johnsontaugustine/