A Trunkful of Win-Vector R Packages


If you follow the Win-Vector blog, you know that we have developed a number of R packages that encapsulate our data science working process and philosophy. The biggest package, of course, is our data preparation package, vtreat, which implements many of the data treatment principles that I describe in my white paper, here.

New Win-Vector Package replyr: for easier dplyr

Using dplyr with a specific data frame, where all the columns are known, is an effective and pleasant way to execute declarative (SQL-like) operations on dataframes and dataframe-like objects in R. It also has the advantage of working not only on local data, but also on dplyr-supported remote data stores, like SQL databases or Spark.
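To make the contrast concrete, here is the pleasant case: a small, self-contained example (the data frame and column name are illustrative, not from any particular project) where the column is known and can be written directly in the pipeline.

```r
library(dplyr)

# a toy data frame with a known column
dframe <- data.frame(column_I_want = c(1, NA, 3))

# with a known column name, the filter reads like a declarative statement
dframe %>% filter(is.na(column_I_want))
```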

However, once we no longer know the column names, the pleasure quickly fades. The currently recommended way to handle dplyr's non-standard evaluation is via the lazyeval package. This is not pretty. I never want to write anything like the following, ever again.

library(dplyr)
library(lazyeval)

# target is a moving target, so to speak
target <- "column_I_want"

# return all the rows where the target column is NA
dframe %>%
  filter_(interp(~ is.na(col), col = as.name(target)))

This example is fairly simple, but the more complex the dplyr expression, and the more columns involved, the more unwieldy the lazyeval solution becomes.

The difficulty of parameterizing dplyr expressions is part of the motivation for Win-Vector’s new package, replyr. I’ve just posted an article to the Win-Vector blog, on the function replyr::let, which lets us parametrize dplyr expressions without lazyeval.
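As a sketch of the idea (assuming replyr is installed; the placeholder name COL is my own illustrative choice), replyr::let maps a symbolic name onto the actual column name, so the dplyr expression inside reads as if the column were known:

```r
library(dplyr)
library(replyr)

target <- "column_I_want"

# let() substitutes the placeholder COL with the actual column name,
# then evaluates the dplyr expression -- no lazyeval incantations needed
let(
  alias = list(COL = target),
  dframe %>% filter(is.na(COL))
)
```

The win is that the expression inside let() is ordinary dplyr code, so it stays readable no matter how many parameterized columns are involved.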


Upcoming Talks

I will be speaking at the Women Who Code Silicon Valley meetup on Thursday, October 27.

The talk is called Improving Prediction using Nested Models and Simulated Out-of-Sample Data.

In this talk I will discuss nested predictive models. These are models that predict an outcome or dependent variable (called y) using additional submodels that have also been built with knowledge of y. Practical applications of nested models include “the wisdom of crowds”, prediction markets, variable re-encoding, ensemble learning, stacked learning, and superlearners.

Nested models can improve prediction performance relative to single models, but they introduce a number of undesirable biases and operational issues, and, when improperly used, they are statistically unsound. However, modern practitioners have made effective, correct use of these techniques. In my talk I will give concrete examples of nested models, how they can fail, and how to fix failures. The solutions we will discuss include advanced data partitioning, simulated out-of-sample data, and ideas from differential privacy. The theme of the talk is that with proper techniques, these powerful methods can be safely used.

John Mount and I will also be giving a workshop called A Unified View of Model Evaluation at ODSC West 2016 on November 4 (the premium workshop sessions), and November 5 (the general workshop sessions).

We will present a unified framework for predictive model construction and evaluation. Using this perspective we will work through crucial issues from classical statistical methodology, large data treatment, variable selection, ensemble methods, and all the way through stacking/super-learning. We will present R code demonstrating principled techniques for preparing data, scoring models, estimating model reliability, and producing decisive visualizations. In this workshop we will share example data, methods, graphics, and code.

I’m looking forward to these talks, and to seeing those of you who can attend.

Practical Data Science with R now in Chinese Translation!

Our publisher, Manning, has kindly sent us complimentary copies of the new Simplified Chinese translation of Practical Data Science with R.

[Photo: the Simplified Chinese edition of Practical Data Science with R]

We can’t read it, of course, but it’s cool (and a bit intimidating) to see what our work looks like in another language and character set. Here are a couple of peeks inside, just for fun.

[Photos: two page spreads from inside the translated edition; click each for a bigger photo]

I wonder if Manning is planning any other translated editions? I’ll keep you posted.

Principal Components Regression: A Three-Part Series and Upcoming Talk

Well, since the last time I posted here, the Y-Aware PCR series has grown to three parts! I’m pleased with how it came out. The three parts are as follows:

  • Part 1: A review of standard “x-only” PCR, with a worked example. I also show some issues that can arise with the standard approach.
  • Part 2: An introduction to y-aware scaling to guide PCA in identifying principal components most relevant to the outcome of interest. Y-aware PCA helps alleviate the issues that came up in Part 1.
  • Part 3: How to pick the appropriate number of principal components.
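The core of the y-aware scaling described in Part 2 can be sketched in a few lines of base R. This is a minimal illustration, not the series' actual example code: each input column x is centered and rescaled by the slope of the single-variable regression of y on x, so that a unit change in any scaled column corresponds to a unit change in the predicted outcome.

```r
# A sketch of y-aware scaling: rescale each column x as m * (x - mean(x)),
# where m is the slope from lm(y ~ x). Function and variable names are my own.
y_aware_scale <- function(xframe, y) {
  scaled <- lapply(xframe, function(x) {
    m <- coef(lm(y ~ x))[["x"]]   # slope of y regressed on this column alone
    m * (x - mean(x))             # center, then scale by that slope
  })
  as.data.frame(scaled)
}

# ordinary PCA can then be run on the y-aware columns, e.g.:
# pcs <- prcomp(y_aware_scale(xframe, y), center = FALSE, scale. = FALSE)
```

Columns nearly unrelated to y get slopes near zero, so they are shrunk away before PCA ever sees them, which is exactly how y-aware scaling steers the principal components toward the outcome.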


I will also be giving a short talk on y-aware principal components analysis in R at the August Bay Area useR Group meetup on August 9, along with talks by consultant Allan Miller and Jocelyn Barker from Microsoft. It promises to be an interesting evening.

The meetup will be at Guardant Health in Redwood City. Hope to see you there.