The Global Mobile Trends 2017 Report confirms that three-quarters of the population is now connected to the Internet via mobile phones. The Internet of Things (IoT) is the network of physical devices, vehicles, home appliances, and other items embedded with electronics, software, sensors, actuators, and connectivity, which enables these things to connect, collect, and exchange data. The IoT concept was first introduced by Kevin Ashton, co-founder of the Auto-ID Center at MIT, when he wanted to present the idea of radio frequency identification (RFID) to Procter & Gamble (P&G) in 1999.
In the Open Data concept, certain organizational data are freely available for anyone to use and republish as they wish, without restrictions such as copyrights or patents. It is similar in spirit to concepts such as Open Source, Open Hardware, Open Government, and Open Knowledge.
Most third-party applications have restricted access through the public Internet due to an organization's security boundaries. If applications are hosted in AWS, creating a site-to-site VPN can provide access to these kinds of networks.
It is interesting to see how intelligent components come into play in Dynamics 365. Let's have a look at Relationship Insights, a key set of features within the Embedded Intelligence suite. The suite analyses Dynamics 365 data as well as the Microsoft Exchange database to produce insights that help you understand customer relationships through their observed behavior.
Securing an enterprise application is a key aspect of modern enterprise application development. There are plenty of mechanisms to secure such applications, especially in distributed environments. An Identity and Access Management (IAM) system is one of the most popular security components in modern distributed applications.
DevOps is the combination of cultural philosophies, practices, and tools that increases an organization's ability to deliver applications and services at high velocity, evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. In essence, it is a software development method that brings development and operational activities together.
Spark is one of the preferred data processing engines for a vast range of situations. Data scientists and application developers integrate Spark into their own implementations in order to transform, analyze, and query data at scale.
Loading a large number of images asynchronously in a scrollable view like UITableView or UICollectionView is a common task. However, keeping the app responsive in terms of scrolling while images are being downloaded can be a challenge; in the worst cases, it can even crash the app.
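The pattern that keeps scrolling smooth is platform-agnostic: fetch off the main thread and memoize results so re-used cells hit a cache instead of the network. Here is a minimal Python sketch of that pattern (the `fetch_image` function and URL are hypothetical stand-ins for a real network download; in UIKit the callback would additionally be dispatched back to the main queue):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a network fetch; a real app would download bytes.
def fetch_image(url):
    return b"bytes-for-" + url.encode()

class ImageLoader:
    """Download images off the main thread and memoize the results,
    so a scrolling list never blocks waiting on the network."""
    def __init__(self, max_workers=4):
        self._pool = ThreadPoolExecutor(max_workers=max_workers)
        self._cache = {}

    def load(self, url, on_ready):
        if url in self._cache:            # cache hit: deliver immediately
            on_ready(self._cache[url])
            return
        def task():
            data = fetch_image(url)
            self._cache[url] = data       # memoize for future cells
            on_ready(data)                # UIKit would hop to the main queue here
        self._pool.submit(task)

    def close(self):
        self._pool.shutdown(wait=True)

loader = ImageLoader()
results = []
loader.load("https://example.com/a.png", results.append)
loader.close()                            # wait only so the demo is deterministic
print(results[0])
```

A production loader would also cancel in-flight downloads for cells that scroll off screen, which is where most of the real complexity lives.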
Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse solution that uses columnar storage to minimize input/output (I/O), provide high data compression rates, and offer fast performance. As a typical data warehouse, it is primarily designed for Online Analytical Processing (OLAP) and Business Intelligence (BI), not for use as an Online Transaction Processing (OLTP) tool. It supports ANSI SQL and is a massively parallel processing (MPP) database.
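To see why columnar storage cuts I/O for analytic queries, consider a toy illustration (this is a conceptual sketch in Python, not Redshift's actual storage engine): a row store must touch every field of every row, while a column store reads only the columns the query references.

```python
# Three rows with three fields each; the invented query is SELECT SUM(amount).
rows = [
    {"id": 1, "region": "EU", "amount": 120},
    {"id": 2, "region": "US", "amount": 80},
    {"id": 3, "region": "EU", "amount": 200},
]

# Row layout: the scan touches every field of every row.
row_fields_read = sum(len(r) for r in rows)

# Columnar layout: the same query reads just the "amount" column.
columns = {key: [r[key] for r in rows] for key in rows[0]}
col_fields_read = len(columns["amount"])

print(row_fields_read, col_fields_read)  # 9 values touched vs 3
```

With wide tables and billions of rows, that ratio is what makes OLAP scans fast, and storing each column contiguously is also what enables the high compression rates mentioned above.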
One of the leading independent investment management companies was seeking to implement a cloud-based Enterprise Data Lake (EDL), Enterprise Data Warehouse (EDW), and Enterprise Data Pipeline (EDP) that leverage both AWS services and complementary open-source tools in the market.
The project is an AWS-based solution, built for one of the world's most widely recognized cruise brands, that provides a data integration platform to accelerate digital analytics, capture the customer journey, and identify buying/cancellation patterns using Machine Learning (ML) approaches.
The client is a top American company specializing in the use of marketing to sell home care, health, and beauty products. The company's global data platform has a legacy data warehouse that stores various marketing data for generating BI reports.
A data lake is a single platform combining data governance, analytics, and storage. It is a secure, durable, and centralized cloud-based storage platform that lets you ingest and store structured and unstructured data, and it allows you to make the necessary transformations on the raw data assets as needed. A comprehensive portfolio of data exploration, reporting, analytics, machine learning, and visualization can be applied to the data by utilizing this data lake architecture.
Machine Learning is the practice of using existing algorithms to ingest data, learn from it, and then make a decision or prediction about something. So rather than developing software procedures with a certain set of instructions to achieve a specific task, the machine is trained using huge amounts of data and algorithms that give it the capability to learn how to accomplish the task.
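A tiny concrete example of "learning from data rather than hand-coded rules": instead of writing the rule relating size to price ourselves, we let a one-variable least-squares fit recover it from examples. The data set below is invented for illustration; real problems use far more data and richer models.

```python
# Fit y = slope * x + intercept from examples, in plain Python.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training data the machine learns from (hypothetical size -> price pairs).
sizes  = [1.0, 2.0, 3.0, 4.0]
prices = [2.1, 3.9, 6.2, 7.8]

slope, intercept = fit_line(sizes, prices)
predict = lambda x: slope * x + intercept
print(predict(5.0))
```

The program was never told the size/price relationship; it inferred one from the examples, which is the core idea the paragraph above describes.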
Any organization that is serious about frequently releasing iterations of bug-free software should have some level of DevOps processes in place in its delivery pipeline. The following post will discuss how to implement a DevOps continuous delivery/deployment pipeline on AWS Cloud infrastructure.
Machine learning often feels a lot harder than it should to most developers, because the process of building and training models and then deploying them into production is too complicated and too slow. First, you need to collect and prepare your training data and discover which elements of your data set are important. Then, you need to select which algorithm and framework you'll use.
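As a minimal sketch of the "discover which elements of your data set are important" step, one simple heuristic is to rank candidate features by the absolute value of their Pearson correlation with the target. The feature names and numbers below are made up for illustration:

```python
# Rank features by |Pearson correlation| with the target.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

features = {
    "ad_spend": [1, 2, 3, 4, 5],   # moves in lockstep with the target
    "noise":    [5, 1, 4, 2, 3],   # unrelated to the target
}
target = [10, 20, 30, 40, 50]

ranked = sorted(features,
                key=lambda f: abs(pearson(features[f], target)),
                reverse=True)
print(ranked)  # strongest candidate feature first
```

Real feature selection uses more robust techniques (mutual information, model-based importance), but the workflow, scoring each feature against the target before choosing an algorithm, is the same.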
AWS Kinesis Firehose is a fully managed service for transforming and delivering streaming data to a given destination. A destination can be an S3 bucket, a Redshift cluster, Splunk, or the Elasticsearch Service. In the following tutorial I'll walk you through the process of streaming CloudWatch Logs generated by an AWS Lambda function to an S3 bucket.
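One detail worth knowing up front: CloudWatch Logs delivers subscription data as a base64-encoded, gzip-compressed JSON payload (under `event["awslogs"]["data"]` when a Lambda function is the subscriber). The sketch below builds a hypothetical payload and decodes it the way a transform function in such a pipeline would; the log group name and message are invented:

```python
import base64
import gzip
import json

# A hypothetical batch of log events, as CloudWatch Logs would package it.
log_batch = {
    "logGroup": "/aws/lambda/demo",
    "logEvents": [{"message": "hello from lambda"}],
}

# What CloudWatch puts on the wire: gzip-compressed, then base64-encoded.
encoded = base64.b64encode(gzip.compress(json.dumps(log_batch).encode()))
event = {"awslogs": {"data": encoded}}

def decode_cloudwatch_event(event):
    """Reverse the encoding to recover the original JSON batch."""
    raw = base64.b64decode(event["awslogs"]["data"])
    return json.loads(gzip.decompress(raw))

decoded = decode_cloudwatch_event(event)
print(decoded["logEvents"][0]["message"])
```

If you forward the records to Firehose without decompressing them first, the objects that land in S3 will be gzip blobs rather than readable log lines, so this decode step usually belongs in the transformation Lambda.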
According to NIST, “cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” (NIST, 2011). The traditional approach incurs a huge upfront capital expenditure along with too much excess capacity, and does not allow capacity to be planned based on market demand.
So you have written some tests for your project, and now you are waiting for the test run to complete to see if anything breaks due to the changes made in your last commit. Finally, you can merge your changes to develop when everything is green on your CI. But as the number of tests increases, their execution time also increases.
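One common way to claw that time back is to run independent tests in parallel instead of serially. Here is a toy Python sketch of the idea using a thread pool; the test names and durations are invented, and real suites would use a runner's built-in parallelism (e.g. splitting tests across CI workers) rather than hand-rolled threads:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test(name, seconds):
    time.sleep(seconds)            # stands in for real test work
    return (name, "passed")

# Three independent tests, each taking ~0.2s: 0.6s serially.
suite = [("test_login", 0.2), ("test_signup", 0.2), ("test_search", 0.2)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(lambda t: run_test(*t), suite))
elapsed = time.perf_counter() - start

print(results)
print(f"wall time ~{elapsed:.2f}s instead of ~0.60s")
```

The catch, of course, is that tests must actually be independent (no shared database state or fixture ordering) before they can be parallelized safely.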