Extracting Seasonality and Trend from Data: Decomposition Using R

Click the image to follow the link.

An excellent description of classical decomposition with Python and R.

Time series decomposition works by splitting a time series into three components: seasonality, trend, and random fluctuation. To show how this works, we will study the decompose() and stl() functions in the R language.

Understanding Decomposition

Decompose One Time Series into Multiple Series

Time series decomposition is a mathematical procedure which transforms a time series into multiple different time series. The original time series is often split into 3 component series:

  • Seasonal: Patterns that repeat with a fixed period of time. For example, a website might receive more visits during weekends; this would produce data with a seasonality of 7 days.
  • Trend: The underlying trend of the metrics. A website increasing in popularity should show a general trend that goes up.
  • Random: Also called “noise,” “irregular,” or “remainder,” these are the residuals of the original time series after the seasonal and trend series are removed.
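The classical moving-average logic behind R's decompose() can be sketched in a few lines of Python. This is a minimal illustration on synthetic data, not the actual R implementation; the series, the weekly pattern, and every parameter value below are invented for the example.

```python
import numpy as np

# Synthetic daily series: upward trend + weekly (period 7) seasonality + noise.
rng = np.random.default_rng(0)
n, period = 280, 7
t = np.arange(n)
pattern = np.array([5, 3, 1, -2, -4, -1, -2])   # sums to zero (centered)
series = 0.5 * t + np.tile(pattern, n // period) + rng.normal(0, 0.5, n)

# 1) Estimate the trend with a centered moving average over one full period.
kernel = np.ones(period) / period
trend = np.convolve(series, kernel, mode="valid")   # drops period//2 points each end
half = period // 2

# 2) Detrend, then average each position within the period -> seasonal indices.
detrended = series[half : n - half] - trend
phases = np.arange(half, n - half) % period
seasonal_idx = np.array([detrended[phases == p].mean() for p in range(period)])
seasonal_idx -= seasonal_idx.mean()                 # center the seasonal component

# 3) Whatever is left over is the random (remainder) component.
remainder = detrended - seasonal_idx[phases]
```

Averaging the detrended values by position within the period recovers the weekly pattern, and the remainder is what decompose() would report as the random component.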


Making it easier to discover datasets

Click the image to follow the link.

A new resource from Google for finding datasets.

In today’s world, scientists in many disciplines and a growing number of journalists live and breathe data. There are many thousands of data repositories on the web, providing access to millions of datasets; and local and national governments around the world publish their data as well. To enable easy access to this data, we launched Dataset Search, so that scientists, data journalists, data geeks, or anyone else can find the data required for their work and their stories, or simply to satisfy their intellectual curiosity.


When Variable Reduction Doesn’t Work

Click the image to follow the link.

A good example of how the usual procedures don't always work.

Summary: Exceptions sometimes make the best rules. Here's an example of well-accepted variable reduction techniques resulting in an inferior model, and a case for dramatically expanding the number of variables we start with.

One of the things that keeps us data scientists on our toes is that the well-established rules of thumb don't always work. Certainly one of the most well-worn of these rules is the parsimonious model: always seek to create the best model with the fewest variables. And woe to you who violate this rule. Your model will overfit, include false random correlations, or at the very least will just be judged to be slow and clunky.

Certainly this is a rule I embrace when building models, so I was surprised and then delighted to find a well-conducted study by Lexis/Nexis that lays out a case where this clearly isn't true.


How To Forecast Time Series Data With Multiple Seasonal Periods

Click the image to follow the link.

Analysis of complex series with multiple seasonal periods.

Time series data is produced in domains such as IT operations, manufacturing, and telecommunications. Examples of time series data include the number of client logins to a website on a daily basis, cell phone traffic collected per minute, and temperature variation in a region by the hour. Forecasting a time series signal ahead of time helps us make decisions such as planning capacity and estimating demand. Previous time series analysis blog posts focused on processing time series data that resides on Greenplum database using SQL functions. In this post, I will examine the modeling steps involved in forecasting a time series sequence with multiple seasonal periods. The various steps involved are outlined below:

  • Multiple seasonality is modelled with the help of Fourier series with different periods
  • External regressors in the form of Fourier terms are added to an ARIMA model to account for the seasonal behavior
  • The Akaike Information Criterion (AIC) is used to find the best-fitting model
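The Fourier-regressor idea in the steps above can be sketched in numpy. The post itself fits an ARIMA model with these terms as external regressors (as R's forecast package does with fourier() and auto.arima()); here, ordinary least squares stands in for that fitting step, and all names, periods, and coefficients are illustrative.

```python
import numpy as np

def fourier_terms(n, period, K):
    """Sin/cos regressors for one seasonal period, K harmonics.
    Mirrors the idea of forecast::fourier() in R (illustrative helper)."""
    t = np.arange(n)
    cols = []
    for k in range(1, K + 1):
        cols.append(np.sin(2 * np.pi * k * t / period))
        cols.append(np.cos(2 * np.pi * k * t / period))
    return np.column_stack(cols)

# Hourly data with a daily (24) and a weekly (168) cycle: stack the
# Fourier terms for each period side by side as the regressor matrix.
n = 24 * 7 * 8
X = np.column_stack([fourier_terms(n, 24, K=3), fourier_terms(n, 168, K=3)])

# Synthetic signal with both seasonal periods, then a plain least-squares
# fit standing in for the ARIMA-with-regressors step described above.
rng = np.random.default_rng(1)
t = np.arange(n)
y = 10 + 3 * np.sin(2 * np.pi * t / 24) + 2 * np.cos(2 * np.pi * t / 168) \
    + rng.normal(0, 0.3, n)
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
fitted = A @ coef
```

In practice one would refit with different numbers of harmonics K per period and keep the model with the lowest AIC, as the third step describes.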


Avoiding a common mistake with time series

Click the image to follow the link.

A case in which the trend masks the rest of the series, creating high correlations.

A basic mantra in statistics and data science is correlation is not causation, meaning that just because two things appear to be related to each other doesn’t mean that one causes the other. This is a lesson worth learning.

If you work with data, throughout your career you’ll probably have to re-learn it several times. But you often see the principle demonstrated with a graph like this:

Dow Jones vs. Jennifer Lawrence

One line is something like a stock market index, and the other is an (almost certainly) unrelated time series like “Number of times Jennifer Lawrence is mentioned in the media.” The lines look amusingly similar. There is usually a statement like: “Correlation = 0.86”.  Recall that a correlation coefficient is between +1 (a perfect linear relationship) and -1 (perfectly inversely related), with zero meaning no linear relationship at all.  0.86 is a high value, demonstrating that the statistical relationship of the two time series is strong.

The correlation passes a statistical test. This is a great example of mistaking correlation for causality, right? Well, no, not really: it’s actually a time series problem analyzed poorly, and a mistake that could have been avoided. You never should have seen this correlation in the first place.

The more basic problem is that the author is comparing two trended time series. The rest of this post will explain what that means, why it’s bad, and how you can avoid it fairly simply. If any of your data involves samples taken over time, and you’re exploring relationships between the series, you’ll want to read on.
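The problem of comparing two trended series can be reproduced in a few lines of Python. The two series below are completely unrelated by construction (independent noise around separate trends; the names are invented for the example), yet their levels correlate almost perfectly; differencing each series, the standard fix, makes the spurious correlation vanish.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300
t = np.arange(n)

# Two unrelated series that both happen to trend upward over time.
x = 0.5 * t + rng.normal(0, 2, n)   # e.g. a stock index (illustrative)
y = 0.2 * t + rng.normal(0, 1, n)   # e.g. media mentions (illustrative)

corr_levels = np.corrcoef(x, y)[0, 1]                   # inflated by shared trend
corr_diffs = np.corrcoef(np.diff(x), np.diff(y))[0, 1]  # trend removed
```

The level correlation lands near 1 purely because both series drift upward, while the correlation of the first differences hovers near zero, which is the honest answer.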


How and Why: Decorrelate Time Series

Click the image to follow the link.

The problem of autocorrelations in time series.

When dealing with time series, the first step consists in isolating trends and periodicities. Once this is done, we are left with a normalized time series, and studying the autocorrelation structure is the next step, called model fitting. The purpose is to check whether the underlying data follows some well-known stochastic process with a similar autocorrelation structure, such as an ARMA process, using tools such as the Box-Jenkins method. Once a fit with a specific model is found, model parameters can be estimated and used to make predictions.

A deeper investigation consists in isolating the autocorrelations to see whether the remaining values, once decorrelated, behave like white noise or not. If departure from white noise is found (using a few tests of randomness), then it means that the time series in question exhibits unusual patterns not explained by trends, seasonality, or autocorrelations. This can be useful knowledge in some contexts such as high-frequency trading, random number generation, cryptography, or cyber-security. The analysis of decorrelated residuals can also help identify change points and instances of slope changes in time series, or reveal otherwise undetected outliers.
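The decorrelation step can be sketched with a minimal numpy example: simulate an AR(1) process, fit it by regressing each value on its predecessor (a stand-in for full Box-Jenkins fitting), and then check whether the residuals look like white noise via their lag-1 autocorrelation. The process, its parameter, and the helper names are all invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n, phi = 2000, 0.7

# Simulate an AR(1) process: x[t] = phi * x[t-1] + e[t]
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()

# Fit AR(1) by regressing x[t] on x[t-1] (stand-in for Box-Jenkins fitting).
phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
resid = x[1:] - phi_hat * x[:-1]

def acf1(v):
    """Lag-1 autocorrelation of a series (mean removed)."""
    v = v - v.mean()
    return np.dot(v[1:], v[:-1]) / np.dot(v, v)

# If the AR(1) model captured the structure, the decorrelated residuals
# should be indistinguishable from white noise: lag-1 autocorrelation
# inside the rough 95% band of +/- 1.96 / sqrt(n).
band = 1.96 / np.sqrt(len(resid))
white_noise_like = abs(acf1(resid)) < band
```

The raw series has a strong lag-1 autocorrelation (near phi), while the residuals' autocorrelation falls inside the noise band; residuals that fail this kind of check are the "unusual patterns" the post refers to.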


The 7 Most Important Data Mining Techniques

Click the image to follow the link.

A short introduction to some of the most widely used methods in data mining.

Data mining is the process of looking at large banks of information to generate new information. Intuitively, you might think that data “mining” refers to the extraction of new data, but this isn’t the case; instead, data mining is about extrapolating patterns and new knowledge from the data you’ve already collected.

Relying on techniques and technologies from the intersection of database management, statistics, and machine learning, specialists in data mining have dedicated their careers to better understanding how to process and draw conclusions from vast amounts of information. But what are the techniques they use to make this happen?

Playground to Politics

Click the icon to follow the link.

Data from a questionnaire survey of fifth-form pupils at a North London school.

A study of values and attitudes among fifth formers in a North London comprehensive school.

This survey of teenage attitudes and opinions in a North London comprehensive school (11-18 mixed) was designed and conducted, under my guidance and supervision, by three of my sophomore students as part of their group research dissertation for BA Applied Social Studies (Social Research) at the Polytechnic of North London (PNL, now part of London Metropolitan University). It aimed to discover something about pupils' future expectations and awareness of, and attitudes towards, various current social issues and problems, particularly racism and sexism. It replicates various items and scales from other work (Wilson-Patterson, Eysenck, Himmelweit, Srole-Christie), particularly the St Paul's Girls senior pupils study (Feb 1973), some of which were also used in the SSRC Survey Unit Quality of Life surveys 1971-75.

The self-completion questionnaire was completed in December 1981 by all fifth form pupils present on the day of the survey (N=142).  It was administered during time-tabled Social Studies classes and, time permitting, was followed by discussion with class teachers and the PNL students of the issues covered in the survey.

Given the particularly high quality of this project, a user manual was prepared by John Hall and Alison Walker for use with the postgraduate Survey Analysis Workshop and the undergraduate course Data Management and Analysis. It serves as model documentation for similar small survey projects.

Tags:

KNIME Course

Click the image to follow the link.

A very good KNIME course; it is introductory but covers a large number of features.

KNIME Online Self-Training

Welcome to the KNIME Self-training course. The focus of this document is to get you started with KNIME as quickly as possible and guide you through essential steps of advanced analytics with KNIME. Optional and very useful topics such as reporting, KNIME Server and database handling are also included to give you an idea of what else is possible with KNIME.

  1. Installing KNIME Analytics Platform and Extensions
  2. Data Import / Export and Database / Big Data
  3. ETL
  4. Visualization
  5. Advanced Analytics
  6. Reporting
  7. KNIME Server


MARS – Multivariate Adaptive Regression Splines

Click the image to follow the link.

A good description of these data-analysis algorithms by the authors themselves.

An Overview of MARS

What is “MARS”?

MARS®, an acronym for Multivariate Adaptive Regression Splines, is a multivariate non-parametric regression procedure introduced in 1991 by world-renowned Stanford statistician and physicist, Jerome Friedman (Friedman, 1991). Salford Systems’ MARS, based on the original code, has been substantially enhanced with new features and capabilities in exclusive collaboration with Friedman.
