Blog

When is the best time to go to bed and how long should I sleep?

Our sleep cycle is about 90 minutes long and includes both non-REM and REM sleep.

Some details about the different stages of sleep are as follows:

  • Non-REM is deep, restorative sleep, while REM is lighter, dream-infused sleep
  • The shift from non-REM to REM happens at certain times of the night, regardless of when you go to sleep
  • If you go to bed at 1 a.m., you will move more directly into the REM cycle (lighter sleep), and the reduction in non-REM (deep sleep) will leave you groggy and dull-minded the next day
  • This also disrupts your circadian rhythms and leaves you more vulnerable to disease
  • So sleep between 9 PM and 12 AM, when the body has the opportunity to get the most out of the non-REM phase; for adults, the average sleep time is 7–8 hours a day
  • Have a proper breakfast and exercise to keep yourself feeling active and energized throughout the day.

What is the actual cause of sleep paralysis?

It is defined as the inability to perform voluntary movements or speak at sleep onset or on awakening. Sleep paralysis represents a dissociated state in which REM sleep atonia coexists with fully conscious wakefulness.

It usually occurs at one of two times: hypnagogic or predormital sleep paralysis occurs while falling asleep, while hypnopompic or postdormital sleep paralysis occurs while waking up from sleep.

Some of the reasons it occurs:

  • Lack of sleep
  • A disrupted sleep schedule
  • Mental strain
  • Sleeping on the back
  • Other sleep disorders such as narcolepsy or leg cramps during sleep
  • Medication for ADHD (attention deficit hyperactivity disorder)
  • Addiction to drugs or alcohol.

Blue light and sleep

Scientific studies have pinpointed blue light as a form of light that’s especially aggressive in triggering sleeplessness. Blue light suppresses melatonin production for more than twice as long as other light wavelengths and alters circadian rhythms by twice the degree. Interference with the body’s 24-hour circadian rhythms can have a significant effect on health, creating problems with the cardiovascular, metabolic, and immune systems, disturbing mood, and compromising cognitive function. When your circadian rhythms are out of whack, you think, feel, and perform below your best—and over time, your health can be put at risk.

New research adds to this already large body of evidence of the power of blue light to interfere with sleep. The study of healthy young adults found exposure to blue light from computer screens between the hours of 9-11 p.m.:
1. shortened their total sleep time
2. significantly suppressed melatonin production
3. diminished sleep quality, by increasing the frequency of nighttime awakenings

Researchers also found blue light prevented body temperature from dropping during the night. A gradually lowering body temperature is one key element of the body’s progression into sleep. That blue light kept body temperature elevated to daytime levels is a sign of the degree to which nighttime blue light exposure can disrupt normal circadian rhythms. After nights of blue-light exposure, participants were more tired during the day and experienced more negative moods.

The study compared the effects of blue light and red light, a longer wavelength light. Scientists found red light exposure during the same two-hour evening time period did not interfere with sleep and circadian biology. Body temperature lowered, and sleep progressed as normal.

The takeaway? Nighttime blue light exposure is indeed harmful to sleep and circadian rhythms. And taking steps to manage blue light exposure—including using red light sources during evening hours—can make a real difference.

When blue light is beneficial
While a hazard to health and sleep at night, blue light exposure can be helpful during the day—especially in the morning and early afternoon. Research shows exposure to blue-light during daytime hours can be beneficial in several ways, including:
• Reducing daytime sleepiness
• Speeding reaction times
• Elevating alertness
• Strengthening attention span

Research suggests we don’t need prolonged exposure to blue light to achieve its benefits. A study found 30 minutes of blue-light exposure in the morning led to better working memory performance and faster reaction times, compared to other light exposure.

Part of managing light exposure in today’s world is understanding how light can be used to enhance performance and support good health and sleep.

Ways to regulate blue light exposure
New scientific advancements and technology are helping to provide more ways than ever to tailor, target, and manage light exposure, to reduce health hazards and also to take advantage of the benefits of well-timed light exposure to health and performance.

Carotenoid supplements. Research suggests carotenoid supplements may help strengthen the eye’s natural ability to block blue light. The eye has its own blue-light shield—it’s called the retinal pigment epithelium, a thin layer of cells near the retina. This epithelial layer protects the retina against macular degeneration and acts as a filter for blue-wavelength light. The cells of this layer contain carotenoids, which we absorb through our diet. Research indicates carotenoids are effective at absorbing blue-wavelength light. Carotenoids including lutein and zeaxanthin are available in supplement form and may help protect the eye and also may help protect against the unwanted stimulating effects of blue light exposure at the wrong times.

Blue light filtering software and apps. There are a number of apps that work to reduce blue light exposure during evening hours. Many smartphones and tablets include these blue-light filtering apps as part of their operating systems. Apple’s Night Shift is a built-in iOS app that can be scheduled to shift to warmer, redder wavelength light in the evenings and back to bright, more blue-heavy light in the morning. (Heads up, iPhone users: the recent software upgrade to iOS 11 has changed the location of the Night Shift app in the phone’s control panel.) Flux is free software that adjusts the light of your computer to match the cycle of natural sunlight where you live, reducing brightness and blue light in the evenings.

Blue-light blocking filters and glasses. Both filters for screens and blue-light blocking eyewear are available, to reduce unwanted, poorly timed exposure.

Targeted, speciality light bulbs. One of the most effective ways to manage light exposure is to use LED light bulbs that provide the specific kind of light that’s best for day and night. Energy-efficient LED light bulbs are now made with our circadian biology in mind, designed to minimize the negative effects of blue wavelength light at night and to take advantage of its stimulating effects during the day. In our household, we use Lighting Science’s Good Night® bulbs in the bedrooms. We also use their Good Day® bulbs in other places around the house (in my kids’ bathrooms!). These daytime bulbs are designed to help stimulate alertness and boost focus. I use them in my office, and they’re great in the family bathroom, giving adults and kids that performance-boosting dose of light first thing in the morning.

It’s not only technology but everyday habits that make a difference between light exposure that’s healthful and harmful. Remember to:

Get plenty of light exposure throughout the day. Light exposure during the day boosts attention and alertness, improves mood and cognitive function, strengthens circadian rhythms and can help you sleep better at night. Spending 10 or 15 minutes in the sunlight during the day—first thing in the morning, or on a break at lunch—is a healthful, nourishing light routine.

Keep screens away from your face at night. It’s one thing to relax in front of the television for a while during the evening, and quite another to have your head buried in your smartphone right up until lights out. The degree and intensity of artificial and blue-wavelength light exposure matter. As part of your Power Down Hour™, give yourself a mobile device cut-off time. That’s when you’ll stow your phone for charging—somewhere other than your bedside table. The closer you get to bedtime, the less interactive your media consumption should be. Studies show that social media and other highly interactive forms of media (think: video games and app games) are especially disruptive to sleep.

Modern light exposure requires modern solutions and strategies. Pay attention to your “light diet,” and use the help that’s available to make the light in your life work on behalf of your health and well-being.

Why Do We Sleep?

Our bodies regulate sleep in much the same way that they regulate eating, drinking, and breathing. This suggests that sleep serves a similarly critical role in our health and well-being. Although it’s hard to answer the question “Why do we sleep?”, scientists have developed several theories that together may help explain why we spend a third of our lives sleeping. Understanding these theories can help deepen our appreciation of the function of sleep in our lives.

Hunger and Eating ~ Sleepiness and Sleep
While we may not often think about why we sleep, most of us acknowledge at some level that sleep makes us feel better. We feel more alert, more energetic, happier, and better able to function following a good night of sleep. Sleep makes us feel better and going without sleep makes us feel worse.
One way to think about the function of sleep is to compare it to another of our life-sustaining activities: eating. Hunger is a protective mechanism that has evolved to ensure that we consume the nutrients our bodies require to grow, repair tissues, and function properly. And although it is relatively easy to grasp the role that eating serves— given that it involves physically consuming the substances our bodies need—eating and sleeping are not as different as they might seem.

An Unanswerable Question?
Scientists have explored the question of why we sleep from many different angles. They have examined, for example, what happens when humans or other animals are deprived of sleep. Yet, despite decades of research and many discoveries about other aspects of sleep, the question of why we sleep has been difficult to answer.

Theories of Why We Sleep

  • Inactivity Theory
    One of the earliest theories of sleep, sometimes called the adaptive or evolutionary theory, suggests that inactivity at night is an adaptation that served a survival function by keeping organisms out of harm’s way at times when they would be particularly vulnerable.
  • Energy Conservation Theory
    The energy conservation theory suggests that the primary function of sleep is to reduce an individual’s energy demand and expenditure during part of the day or night, especially at times when it is least efficient to search for food.
  • Restorative Theories
    Another explanation for why we sleep is based on the long-held belief that sleep in some way serves to “restore” what is lost in the body while we are awake. Sleep provides an opportunity for the body to repair and rejuvenate itself.

 

 

A guide to Sub-1 GHz long-range communication and smartphone connection for low-power RF IoT applications

In today’s Internet of Things (IoT) world, a multitude of new wireless connectivity applications enter the market each day, propelling the continuous gathering of sensor data and interactions. From our smartphone telling us how many steps we have taken to our security system telling us that no windows are left open, we have a safety net of reminders helping us effortlessly move throughout our day. This trend of gathering more information creates daily interactions with different wireless devices. Within one day a person will interface with over 100 connected things using multiple wireless protocols or standards. As of now, there is very little overlap as you connect from your home security system to your car to your office. The interface is a bit awkward as you switch between wireless bands and separate networks, so how do you encourage more interaction between these networks? What is often missing is the seamless interaction from 2.4 GHz to Sub-1 GHz.

Sub-1 GHz: Long-range and low-power RF connectivity

For a lot of wireless products, range is much more important than being able to send high-throughput data. Take smart metering, for example, or a sensor device in an alarm system, or a temperature sensor in a home automation system. For these applications, the Sub-1 GHz industrial, scientific and medical (ISM) bands (433/868/915 MHz) offer much better range than a solution using the 2.4 GHz band. The main reason for this is the physical property of the lower frequency. Given the same antenna performance, theory (free space) calls for twice the range when using half the RF frequency. Another important factor is that the longer RF waves have an ability to pass through walls and bend around corners. The lower data rate will also play a part, since the sensitivity of the receiver is a strong function of the data rate. As a rule of thumb, a reduction of the data rate by a factor of four will double the range (free space). Lastly, due to the low duty cycle allowed by the Sub-1 GHz RF regulations, there are fewer issues with disturbances for low-data-rate solutions in the Sub-1 GHz bands than in the 2.4 GHz band (mainly due to Wi-Fi®).
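
The two rules of thumb above (half the frequency, or a quarter of the data rate, roughly doubles free-space range) follow directly from the free-space path-loss formula. The short C sketch below only works through that arithmetic as an illustration of the physics; it is not code from any radio library.

    #include <math.h>
    #include <stdio.h>

    /* Free-space path loss in dB for distance d (metres) and frequency f (Hz):
     * FSPL = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)                     */
    static double fspl_db(double d_m, double f_hz)
    {
        const double c  = 299792458.0;      /* speed of light, m/s */
        const double pi = 3.14159265358979;
        return 20.0 * log10(d_m) + 20.0 * log10(f_hz) + 20.0 * log10(4.0 * pi / c);
    }

    int main(void)
    {
        double d = 1000.0;                   /* 1 km reference distance */

        /* Doubling the frequency adds ~6 dB of path loss at the same distance... */
        printf("868 MHz -> 1736 MHz at 1 km: +%.1f dB\n",
               fspl_db(d, 2.0 * 868e6) - fspl_db(d, 868e6));

        /* ...and 6 dB is exactly what doubling the distance costs, so halving
         * the frequency buys roughly twice the free-space range.                */
        printf("1 km -> 2 km at 868 MHz:     +%.1f dB\n",
               fspl_db(2.0 * d, 868e6) - fspl_db(d, 868e6));

        /* Receiver sensitivity improves by about 10*log10(4) = 6 dB when the
         * data rate drops by 4x, which again corresponds to double the range.  */
        printf("4x lower data rate:          %.1f dB sensitivity gain\n",
               10.0 * log10(4.0));
        return 0;
    }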


The lower frequency also helps to keep the current consumption low. In addition to offering higher battery life, the lower peak current consumption also enables a smaller form-factor solution using coin cell batteries. However, getting the data from the Sub-1 GHz system into your smart device can be challenging, mostly because smart devices do not typically include Sub-1 GHz radios for ISM band communication. For this reason, Bluetooth® low energy is the de facto standard to use, which is where a dual-band wireless microcontroller (MCU) can act as a bridge between the two communication bands. With the SimpleLink™ dual-band CC1350 wireless MCU, combining Sub-1 GHz and Bluetooth low energy is now possible. The CC1350 device is able to transmit +10 dBm using only 15 mA, which a coin cell battery can comfortably handle. Using low data rates, it is possible to transmit over 20 km (line of sight from an elevated transmitter), with the RF receiver consuming only 5.4 mA from a 3.6-V lithium battery.
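
To see why a coin cell copes with this, a rough duty-cycle estimate is enough. The sketch below uses the 15 mA TX and 5.4 mA RX figures quoted above; the wake-up interval, standby current and 225 mAh CR2032 capacity are assumptions made for the example, not CC1350 specifications.

    #include <stdio.h>

    int main(void)
    {
        /* Figures quoted in the text */
        double tx_ma = 15.0, rx_ma = 5.4;

        /* Assumptions for this example only */
        double sleep_ua   = 1.0;     /* standby current, microamps               */
        double tx_s       = 0.005;   /* 5 ms of transmit per wake-up             */
        double rx_s       = 0.005;   /* 5 ms of receive (e.g. an ACK) per wake-up */
        double period_s   = 10.0;    /* one wake-up every 10 seconds             */
        double cr2032_mah = 225.0;   /* typical coin cell capacity               */

        double avg_ma = (tx_ma * tx_s + rx_ma * rx_s +
                         (sleep_ua / 1000.0) * (period_s - tx_s - rx_s)) / period_s;

        printf("Average current:    %.4f mA\n", avg_ma);
        printf("Estimated lifetime: %.0f days\n", cr2032_mah / avg_ma / 24.0);
        return 0;
    }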

 

Challenges with the Sub-1 GHz bands

It is easy to appreciate the range and low power of the Sub-1 GHz band, but naturally there are also some drawbacks. As described earlier, one of the main tools used in our daily life, the smartphone, does not use Sub-1 GHz. Or actually, it does: it uses the licensed bands (GPRS, 3G, and LTE) to get the best range, but it does not use the Sub-1 GHz ISM bands. The fact that both Wi-Fi and Bluetooth are standard features of any smartphone on the market today offers a clear advantage for those technologies. An obvious solution is to combine the best of two worlds: Sub-1 GHz technology for long range and low power, and a 2.4 GHz solution using Bluetooth low energy for the smartphone/tablet/PC connection. The first RF IC publicly available on the market that can do this is the CC1350 wireless MCU from Texas Instruments (TI). The CC1350 device is a single-chip solution that includes a high-efficiency ARM® Cortex®-M3 MCU, a low-power sensor controller, and a low-power dual-band RF transceiver.

 

SimpleLink dual-band CC1350 wireless MCU

The ARM Cortex-M3 application processor has 128 kB of flash and 20 kB of ultra-low-power SRAM, in addition to 8 kB of SRAM used as cache (which can also be allocated as regular SRAM). The RF core contains an RF front end capable of supporting the most relevant Sub-1 GHz bands (315, 433, 470, 868, 915 MHz) as well as 2.4 GHz. The radio core includes a very flexible, software-configurable modem that covers data rates from a few hundred bits per second up to 4 Mbps and multiple modulation formats, from “simple” OOK (on–off keying) to (G)FSK, (G)MSK, 4-(G)FSK and shaped 8-FSK. The main advantage of a very flexible radio core is the ability to handle the wealth of legacy Sub-1 GHz solutions in the market today and to support modifications to existing standards. One good example is that the CC1350 wireless MCU is able to handle, with only firmware upgrades, the new long-range mode as well as the new high-speed mode announced by the Bluetooth SIG in June 2016 (Bluetooth 5.0).

The CC1350 wireless MCU is a true single-chip solution offering an ultra-small PCB footprint, down to 4×4 mm (QFN). If more I/Os are required, it is also offered in a 7×7 mm (QFN) package with 30 I/Os.

 

The ARM Cortex-M0 in the RF core runs pre-programmed ROM functions to support both low-level Bluetooth and proprietary RF solutions. This greatly offloads time-critical tasks from the main ARM Cortex-M3 application processor.

The power system tightly integrates a DC/DC (DC-to-DC) converter that is active in all modes of operation, including standby. This ensures low-power operation as well as stable performance (RF range) despite drops in battery voltage.

ROM in CC1350 wireless MCU

The SimpleLink CC1350 device contains over 200 kB of ROM (read-only memory) with libraries covering the following functions:

  • TI-RTOS (real-time operating system)
  • Low-level driver library (SPI, UART, etc.)
  • Security functions
  • Low-level and some higher-level Bluetooth stack functions

Note that ROM code can be fixed/patched by functions in Flash or RAM.

Ultra-low current consumption

The SimpleLink CC1350 and CC1310 (Sub-1 GHz only) wireless MCUs offer ultra-low current consumption in all modes of operation, both for the RF core and for the microcontroller.


The sensor controller

The sensor controller is a small, power-optimized 16-bit MCU that is included in the CC13xx devices to handle analog and digital sensors in a very low-power manner. It is programmed/configured using Sensor Controller Studio, where users find predefined functions for the different peripherals. The tool also offers software examples of common sensor solutions like ADC reading (streaming, logging and window-compare functions) and I2C/SPI for digital sensors. The sensor controller can also be used for capacitive touch buttons.

Figure 2: TI Sensor Controller Studio

TI offered one of the first certified Bluetooth low energy software stacks. The stack has since been developed further to support the SimpleLink CC26xx platform released in 2015. This stack is now also available for the CC1350 device and has all the features that the Bluetooth 4.2 standard offers, from “simple” beacons to a fully connectable stack. All TI RF stacks use TI-RTOS, a free real-time operating system from TI. TI-RTOS is distributed under the 3-clause BSD license, meaning that full source code is provided. To further reduce the complexity of developing applications and let customers focus solely on their application development, TI provides a large set of peripheral drivers, including a performance-optimized RF driver. The TI-RTOS for CC13xx and CC26xx software development kits (SDKs) offer a large set of getting-started examples. The RF examples serve as a great starting point for developing proprietary systems; all software examples are provided to show performance-optimized usage of the various drivers. For new product development, without the need to adhere to legacy products, a great solution is to use the new TI 15.4-Stack offering. TI 15.4-Stack is TI’s implementation of the IEEE 802.15.4g/e standards, enabling star-type networks. It is offered (free of charge) in two versions:

  1. A version optimized for European RF regulations (ETSI), using frequency agility and LBT (listen before talk)
  2. A version optimized for US RF regulations (FCC), using frequency hopping to enable the highest output power

Sub-1 GHz and Bluetooth low energy use cases

The fact that the CC1350 wireless MCU enables both Sub-1 GHz and Bluetooth low energy in a single device opens up a lot of possibilities. Here are a few of them:

1.    Installation/commissioning, maintenance and diagnostics of a Sub-1 GHz network

During installation/commissioning, the long-range capability of Sub-1 GHz can be a drawback. During installation, you want only your own selection of devices to be connected together in the network, not nodes from, for example, a neighbor who might have the same product installed. Using a smartphone, with its shorter-range (and also much higher data rate) Bluetooth connection and large display, will make installing devices a lot easier. With the Internet-connected smartphone, it is also easier to download new software for the node as well as to collect diagnostic information. Such a solution works for do-it-yourself as well as professionally installed products. Examples:

  • Let’s say you buy a two-pack of pre-commissioned smoke detectors that are connected together over a Sub-1 GHz network, but then you find out that you need to add another device to your network.
  • Another example would be a consumer or professional installation of intruder alarm systems or home automation.

Commissioning

Figure 3: Bluetooth software: fully connectable stack. Sub-1 GHz software: TI 15.4-Stack or a legacy Sub-1 GHz solution.

2.    Firmware updates

In order to ensure the best performance over the complete lifetime of a connected product, it is critical to be able to offer over-the-air (OTA) firmware updates. Updating the firmware can also add new features to devices already deployed in the field.

Taking advantage of the higher data rates that Bluetooth low energy offers, firmware updates can be made much faster. A system can consist of devices that can be firmware-updated both via the Sub-1 GHz link and via the Bluetooth low energy link, offering great flexibility for the user. One example of using Bluetooth low energy for OTA firmware updates is the following scenario: a device gets a command via the Sub-1 GHz interface to switch to Bluetooth low energy mode; the user then connects to the device using Bluetooth low energy; once connected, a new firmware image is transferred over the Bluetooth link; the device then restarts with the new firmware image loaded.
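
As a sketch of that flow, the small state machine below models the mode switch. The state names, command value and callback functions are hypothetical placeholders for calls into the actual Sub-1 GHz stack, Bluetooth low energy stack and bootloader; they are not TI APIs.

    #include <stdint.h>

    #define CMD_ENTER_BLE_OTA  0x01u   /* hypothetical command code */

    typedef enum {
        MODE_SUB1GHZ_NORMAL,   /* normal operation on the Sub-1 GHz network        */
        MODE_BLE_OTA,          /* radio reconfigured for BLE, waiting for an image  */
        MODE_REBOOT            /* new image received and verified, restart on it    */
    } ota_mode_t;

    static ota_mode_t mode = MODE_SUB1GHZ_NORMAL;

    /* Called by the Sub-1 GHz receive path when a command frame arrives. */
    void on_sub1ghz_command(uint8_t cmd)
    {
        if (cmd == CMD_ENTER_BLE_OTA)
            mode = MODE_BLE_OTA;            /* switch the radio core to BLE mode */
    }

    /* Called by the BLE OTA service once the image transfer has finished. */
    void on_ble_image_complete(int image_valid)
    {
        mode = image_valid ? MODE_REBOOT    /* reboot into the new firmware */
                           : MODE_SUB1GHZ_NORMAL;
    }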

Firmware update

Figure 4: Bluetooth software: fully connectable stack. Sub-1 GHz software: TI 15.4-Stack or a legacy Sub-1 GHz solution.

3.    Using the smartphone as a remote display

Making end products that are easy to use is essential for both consumer and professional products. Nice color displays are expensive to use and develop, often mechanically weak, and they increase the current consumption of the product. In many cases, the interface can be greatly reduced if a smartphone is used as the display; alternatively, an existing product can gain enhanced features. Example: a wireless smoke detector that can use a smartphone to display battery status or the time since the last alarm sounded. Basically, any sensor network that has data to display can benefit from using a smartphone as a remote display instead of a standard LCD.

Remote display

Figure 5: Bluetooth software: beacons, no Bluetooth low energy stack needed. Sub-1 GHz software: TI 15.4-Stack or a legacy Sub-1 GHz solution.

4.    Managing Bluetooth low energy beacon payloads

One major benefit of using Sub-1 GHz is the longer range using the same output power. When updating a large set of Bluetooth low energy beacons with new payload information, having to physically approach each and every beacon might not be a manageable task. In this case, the Sub-1 GHz link can be used to connect to the beacon and give it new Bluetooth low energy payload information. This section describes a few use cases.

Figure 6: Bluetooth software: beacons, no stack needed. Sub-1 GHz software: TI 15.4-Stack or a legacy Sub-1 GHz solution.

Google Physical Web

In the Google Physical Web concept, beacons are used to transmit a simple URL that is easily opened in a standard web browser. The advantage of this is ease of use: no special app is needed; one just needs to create a web page that the Bluetooth beacon points to. The Sub-1 GHz link is used to manage the beacon, basically to change the web link.

Google Physical Web uses the open-source Eddystone specification for the Bluetooth low energy beacon frame format. A few different frame formats are specified (a sketch of a URL frame follows the list below):

  1. URL: broadcasts a standard URL
  2. TLM (telemetry): used to broadcast sensor data like battery level, time since reboot, etc.
  3. UID (unique identifier): used for proximity use cases.
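
As a concrete illustration, the byte layout below encodes an Eddystone-URL advertisement for a hypothetical shortened URL. The byte values follow the public Eddystone specification (service UUID 0xFEAA, URL frame type 0x10); how the payload is handed to the radio is stack-specific and not shown. In the beacon-management use case above, the Sub-1 GHz link would simply rewrite the URL bytes at the end of this buffer.

    #include <stdint.h>

    /* Eddystone-URL advertising payload for the hypothetical URL "https://goo.gl/abc" */
    static const uint8_t eddystone_url_adv[] = {
        0x02, 0x01, 0x06,             /* Flags: LE general discoverable          */
        0x03, 0x03, 0xAA, 0xFE,       /* Complete list of 16-bit UUIDs: 0xFEAA   */
        0x10, 0x16, 0xAA, 0xFE,       /* Service Data, 16 bytes, Eddystone UUID  */
        0x10,                         /* Frame type: URL                         */
        0x00,                         /* Calibrated TX power at 0 m (example)    */
        0x03,                         /* URL scheme prefix: "https://"           */
        'g', 'o', 'o', '.', 'g', 'l', '/', 'a', 'b', 'c'
    };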

Example: a movie theater announces the next movie using Bluetooth beacons in multiple places around the theater. The Sub-1 GHz link is used to update the “digital posters” every time there is a new movie showing.

Proprietary beacons

When there is no need to be interoperable with other applications, you might consider implementing your own Bluetooth low energy beacon frame format. One example is the TI SimpleLink SensorTag kit application, where a proprietary frame format is used to interact with devices from the smartphone application.

Getting started

The out-of-the-box software for the CC1350 wireless MCU demonstrates many of the use cases described in this paper. The software can be found at this link.

The SimpleLink dual-band CC1350 wireless MCU LaunchPad™ development kit is preprogrammed with the TI BLE-Stack, allowing you to connect to the device using the SensorTag iOS/Android smartphone app. When connected, the CC1350 device offers the same functionality as the SimpleLink multi-standard CC2650 LaunchPad kit. Using the built-in Bluetooth low energy OTA download, one can easily convert the CC1350 device from a Bluetooth low energy device into a Sub-1 GHz device, thanks to its dual-mode capabilities. The step-by-step guide at the above link will show you how to download new application images to create a small wireless sensor network. The sensor network includes a concentrator that receives Sub-1 GHz data, and nodes that send data over the Sub-1 GHz link to the concentrator and, in addition, reconfigure the radio core on the fly to send out Bluetooth low energy advertisement packets.

AVR Microcontroller Memory Architecture

[Diagram: AVR memory map]

What is a memory map?

The memory map of a microcontroller is a diagram that gives the size, type, and layout of the memories available in the microcontroller. The information used to construct the memory map is extracted from the datasheet of the microcontroller.

The ATMega8515 microcontroller contains three blocks of memory: Program Memory, EEPROM Memory, and Data Memory.

Data Memory Contains:

  • 32 8-bit general purpose registers
  • 64 8-bit input/output registers
  • 512 bytes of SRAM

Program Memory Contains:

  • 8K bytes of flash memory
  • Organized as 4K × 16-bit words

EEPROM Memory Contains:

  • 512 bytes of EEPROM

Flash

Flash is nonvolatile memory, which means its contents persist when power is removed. Its purpose is to hold the instructions that the microcontroller executes. The amount of flash can range from 512 bytes on an ATtiny to 384 KB on an ATxmega384A1. AVR microcontrollers can be thought of as having two modes: a flash programming mode and a flash executing mode.
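
Besides instructions, avr-gcc also lets you keep constant data in flash so it never occupies SRAM. A minimal avr-libc sketch (the table contents are made up for the example):

    #include <avr/pgmspace.h>
    #include <stdint.h>

    /* Example lookup table stored in flash (program memory) rather than SRAM. */
    static const uint8_t level_table[4] PROGMEM = { 0, 64, 128, 255 };

    uint8_t level_lookup(uint8_t i)
    {
        /* Data placed in program memory must be read with the pgm_read_* helpers. */
        return pgm_read_byte(&level_table[i & 0x03]);
    }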

By modifying fuse settings (BOOTSZ0 and BOOTSZ1 on the ATmega168), some AVR microcontrollers allow you to reserve a section of flash for a bootloader, separate from the application flash section. The bootloader allows the flash programming process to be controlled by a flash-resident program; a minimal self-programming sketch appears after the list below. Some bootloader applications might include:

  • Decrypt encrypted flash files to prevent reverse engineering
  • Implement a self-destruct sequence triggered by a tamper sensor
  • Allow the device to be programmed from a TFTP server
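
A minimal sketch of the flash self-programming a bootloader performs, based on avr-libc's <avr/boot.h> interface; it assumes the code is linked into the boot section and that page is page-aligned.

    #include <avr/boot.h>
    #include <avr/interrupt.h>
    #include <avr/io.h>
    #include <stdint.h>

    /* Erase and reprogram one flash page from a RAM buffer (runs from the boot section). */
    void program_page(uint32_t page, const uint8_t *buf)
    {
        cli();                                   /* no interrupts while using SPM   */
        boot_page_erase(page);
        boot_spm_busy_wait();                    /* wait for the erase to finish    */

        for (uint16_t i = 0; i < SPM_PAGESIZE; i += 2) {
            uint16_t word = buf[i] | ((uint16_t)buf[i + 1] << 8);
            boot_page_fill(page + i, word);      /* fill the temporary page buffer  */
        }

        boot_page_write(page);                   /* write the buffer to flash       */
        boot_spm_busy_wait();
        boot_rww_enable();                       /* let the application section run */
        sei();
    }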

RAM

RAM is volatile memory that stores the runtime state of the program being executed. The amount of RAM can range from 32 bytes on an ATtiny28L to 32 KB on an ATxmega384A1. In many AVR microcontrollers, RAM is split into four subsections:

  • General purpose registers
  • I/O registers
  • Extended I/O registers
  • Internal RAM

AVR microcontrollers have RAM on-chip, but some AVRs (e.g. the ATmega128) can use external RAM modules to extend what is built into the microcontroller.

EEPROM

EEPROM is nonvolatile memory used to store data. The most common use is to store configurable parameters. The amount of EEPROM can range from 32 bytes on an ATtiny to 4 KB on an XMega.
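
A minimal avr-libc sketch of that most common use, keeping a configurable parameter in EEPROM (the parameter name and default value are made up for the example):

    #include <avr/eeprom.h>
    #include <stdint.h>

    /* A configurable parameter kept in EEPROM so it survives power cycles. */
    static uint8_t EEMEM ee_brightness = 128;    /* default, programmed via the .eep file */

    uint8_t load_brightness(void)
    {
        return eeprom_read_byte(&ee_brightness);
    }

    void save_brightness(uint8_t value)
    {
        /* eeprom_update_byte writes only if the value changed, saving wear. */
        eeprom_update_byte(&ee_brightness, value);
    }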

3 Big Technologies that will shape the FUTURE of your Business


Internet of Things 

Internet of Things (IoT) describes an emerging trend where a large number of embedded devices (things) are connected to the Internet. These connected devices communicate with people and other things and often provide sensor data to cloud storage and cloud computing resources, where the data is processed and analyzed to gain important insights. Cheap cloud computing power and increased device connectivity are enabling this trend.

[Diagram: IoT system with edge nodes, a data aggregator, and the cloud]

At a high level, many IoT systems can be described using the diagram above. The left side of the diagram illustrates edge nodes. Edge nodes are devices that collect data and include devices such as wireless temperature sensors, heart rate monitors, and hydraulic pressure sensors. The middle of the diagram shows the data aggregator. The aggregator collects, processes and stores data from many edge nodes that are often geographically dispersed, and it may have the capability to analyze and take action on the incoming data.

Cyber Security

Network outages, data compromised by hackers, computer viruses and other incidents affect our lives in ways that range from inconvenient to life-threatening. As the number of mobile users, digital applications and data networks increase, so do the opportunities for exploitation.


WHAT IS CYBER SECURITY?

Cyber security, also referred to as information technology security, focuses on protecting computers, networks, programs and data from unintended or unauthorized access, change or destruction.

WHY IS CYBER SECURITY IMPORTANT?

Governments, military, corporations, financial institutions, hospitals and other businesses collect, process and store a great deal of confidential information on computers and transmit that data across networks to other computers. With the growing volume and sophistication of cyber attacks, ongoing attention is required to protect sensitive business and personal information, as well as safeguard national security.

During a Senate hearing in March 2013, the nation’s top intelligence officials warned that cyber attacks and digital spying are the top threat to national security, eclipsing terrorism.

Machine Learning

Evolution of machine learning

Because of new computing technologies, machine learning today is not like machine learning of the past. It was born from pattern recognition and the theory that computers can learn without being programmed to perform specific tasks; researchers interested in artificial intelligence wanted to see if computers could learn from data. The iterative aspect of machine learning is important because as models are exposed to new data, they are able to independently adapt. They learn from previous computations to produce reliable, repeatable decisions and results. It’s a science that’s not new – but one that’s gaining fresh momentum.

While many machine learning algorithms have been around for a long time, the ability to automatically apply complex mathematical calculations to big data – over and over, faster and faster – is a recent development. Here are a few widely publicized examples of machine learning applications you may be familiar with:

  • The heavily hyped, self-driving Google car? The essence of machine learning.
  • Online recommendation offers such as those from Amazon and Netflix? Machine learning applications for everyday life.
  • Knowing what customers are saying about you on Twitter? Machine learning combined with linguistic rule creation.
  • Fraud detection? One of the more obvious, important uses in our world today.


Why is machine learning important?

Resurging interest in machine learning is due to the same factors that have made data mining and Bayesian analysis more popular than ever: growing volumes and varieties of available data, computational processing that is cheaper and more powerful, and affordable data storage.

All of these things mean it’s possible to quickly and automatically produce models that can analyze bigger, more complex data and deliver faster, more accurate results – even on a very large scale. And by building precise models, an organization has a better chance of identifying profitable opportunities – or avoiding unknown risks.

What’s required to create good machine learning systems?

  • Data preparation capabilities.
  • Algorithms – basic and advanced.
  • Automation and iterative processes.
  • Scalability.
  • Ensemble modeling.

Did you know?

  • In machine learning, a target is called a label.
  • In statistics, a target is called a dependent variable.
  • A variable in statistics is called a feature in machine learning.
  • A transformation in statistics is called feature creation in machine learning.