Stat-Ease Blog


2019 Analytics Solutions Conference Wrap Up

posted by Greg on July 12, 2019

We recently wrapped up the 2019 Analytics Solutions Conference in Minneapolis, MN. This was the first time we organized a US conference on analytics with our Norwegian partners, Camo Analytics. It could not have gone better! A BIG congratulations on a job well done to Shari Kraber who spearheaded the organization of this conference.

The goal of the conference was to help attendees transform their business, from R&D to production, using data-driven tools. If I may toot our own horn here, we did that. There was a lot of information to digest at this meeting, but the camaraderie of attendees made it easy. We overheard many hallway conversations between attendees about what they had just learned and about potential solutions to business problems they were facing.

The conference started with a short course day. Two all-day seminars were conducted in parallel, one titled “Practical DOE: ‘Tricks of the Trade’” and the other “Realizing Industry 4.0 through Industrial Analytics”. We saw a lot of people experienced with design of experiments (DOE) go to the Industry 4.0 course, and a lot of the analytics pros headed to learn about DOE. What a great way to start!

Day one wrapped up with an opening reception for all attendees and speakers, fostering networking and building relationships.

The conference days were filled with session talks that wrapped around keynote presentations (a fantastic trio of speakers, find out more at the link below). The result was a great mix of academic-style overviews with down-to-earth, real-world case studies. It struck a perfect balance.


If you couldn’t make it to the conference, details on each presentation (including PDFs of the presentations) are posted on the ASC speakers page.

To add some fun to the mix, the evening’s social event was a dinner cruise on the Mississippi River for all attendees. It was quite the treat, especially for those of us who live here! We rarely take the opportunity to enjoy activities like this in our own backyard. We traveled through a lock, spotted a bald eagle in the trees, and enjoyed the casual atmosphere.


The next Stat-Ease hosted conference will be our 8th European DOE Users Conference, to be held in Groningen, the Netherlands, from June 17-19. We hope to see you there!

If you would like to be kept in the loop about the conference, sign up for our mailing list. We will be sending out information regarding a call for speakers and registration later in the fall. Sign up now, before you forget!

See you in Groningen!


Four Tips for Graduate Students' Research Projects

posted by Shari on May 22, 2019

Graduate students are frequently expected to use design of experiments (DOE) in their thesis project, often without much DOE background or support. This results in some classic mistakes.

  1. Designs that were popular in the 1970s-1990s (before computers were widely available) have been replaced with more sophisticated alternatives. A common mistake – using a Plackett-Burman (PB) design for either screening purposes or to gain process understanding for a system that is highly likely to have interactions. PB designs are badly aliased resolution III, thus any interactions present in the system will cause many of the main effect estimates to be biased. This increases the internal noise of the design and can easily cause misleading and inaccurate results. Better designs for screening are regular two-level factorials at resolution IV or minimum-run (MR) designs. For details on PB, regular and MR designs, read DOE Simplified.
  2. Reducing the number of replicated points will likely result in losing important information. A common mistake – reducing the number of center points in a response surface design down to one. The replicated center points provide an estimate of pure error, which is necessary to calculate the lack-of-fit statistic (see the sketch after this list). Perhaps even more importantly, they reduce the standard error of prediction in the middle of the design space. Eliminating the replication may mean that results in the middle of the design space (where the optimum is likely to be) have more prediction error than results at the edges of the design space!
  3. If you plan to use DOE software to analyze the results, then use the same software at the start to create the design. A common mistake – designing the experiment based on traditional engineering practices, rather than on statistical best practices. The software very likely has recommended defaults that will make a better design than what you can plan on your own.
  4. Plan your experimentation budget to include confirmation runs after the DOE has been run and analyzed. A common mistake – assuming that the DOE results will be perfectly correct! In the real world, a process is not improved unless the results can be proven. It is necessary to return to the process and test the optimum settings to verify the results.
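
To make tip #2 concrete, here is a minimal Python sketch (using made-up center-point responses, not data from any real study) showing how replicated center points yield a model-independent estimate of pure error. With a single center point there are zero degrees of freedom for this estimate, so no lack-of-fit test is possible.

```python
# Minimal sketch: pure error from replicated center points (hypothetical data).
import numpy as np

# Five hypothetical center-point responses from a response surface design
center_points = np.array([81.2, 79.8, 80.5, 82.0, 80.1])

n = len(center_points)
pure_error_ss = np.sum((center_points - center_points.mean()) ** 2)
pure_error_df = n - 1                            # degrees of freedom from replication
pure_error_ms = pure_error_ss / pure_error_df    # model-free estimate of sigma^2

print(f"Pure error SS = {pure_error_ss:.3f} on {pure_error_df} df")
print(f"Pure error variance estimate = {pure_error_ms:.3f}")

# A lack-of-fit test compares the model's residual mean square against this
# pure error mean square. With only one center point, pure_error_df = 0 and
# neither the pure error estimate nor the lack-of-fit test is available.
```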

The number one thing to remember is this: using previous students’ theses as a basis for yours means that you may be repeating their mistakes and propagating poor practices! Don’t be afraid to forge a new path and showcase your talent for using state-of-the-art statistical designs and best practices.


Greg's DOE Adventure: Important Statistical Concepts behind DOE

posted by Greg on May 3, 2019

If you read my previous post, you will remember that design of experiments (DOE) is a systematic method used to find cause and effect. That systematic method includes a lot of (frightening music here!) statistics.

[I’ll be honest here. I was a biology major in college. I was forced to take a statistics course or two. I didn’t really understand why I had to take it. I also didn’t understand what was being taught. I know a lot of others who didn’t understand it either. But it’s now starting to come into focus.]

Before getting into the concepts of DOE, we must get into the basic concepts of statistics (as they relate to DOE).

Basic Statistical Concepts:

Variability
In an experiment or process, you have inputs you control, the output you measure, and uncontrollable factors that influence the process (things like humidity). These uncontrollable factors (along with other things like sampling differences and measurement error) are what lead to variation in your results.

Mean/Average
We all pretty much know what this is, right? Add up all your scores, divide by the number of scores, and you have the average score.

Normal distribution
Also known as a bell curve due to its shape. The peak of the curve is at the average, and the curve tails off symmetrically to the left and right.

Variance
Variance is a measure of the variability in a system (see above). Let’s say you have a bunch of data points for an experiment. You can find the average of those points (above). For each data point, subtract that average (so you see how far each piece of data is from the average). Then square that. Why? That way you get rid of the negative numbers; we only want positive numbers. Why? Because the next step is to add them all up, and you want a sum of all the differences without negative numbers getting in the way. Now divide that sum by one less than the number of data points you started with (n - 1). You are essentially taking an average of the squares of the differences from the mean.

That is your variance. Summarized by the following equation:

\(s^2 = \frac{\Sigma(Y_i - \bar{Y})^2}{(n - 1)}\)

In this equation:

Yi is a data point
Ȳ is the average of all the data points
n is the number of data points

Standard Deviation
Take the square root of the variance. The variance is the average of the squares of the differences from the mean. Now you are taking the square root of that number to get back to the original units. One item I just found out: even though standard deviations are in the original units, you can’t add and subtract them. You have to keep it as variance (s²), do your math, then convert back.
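
To tie these pieces together, here is a minimal Python sketch (with made-up numbers) that computes the mean, the sample variance using the n - 1 divisor from the formula above, and the standard deviation:

```python
# Minimal sketch: mean, sample variance, and standard deviation (hypothetical data).
import numpy as np

data = np.array([4.1, 5.3, 4.8, 6.0, 5.1])   # hypothetical data points

mean = data.mean()
variance = np.sum((data - mean) ** 2) / (len(data) - 1)   # s^2, divisor n - 1
std_dev = np.sqrt(variance)                               # s, back in original units

print(f"mean = {mean:.3f}, variance = {variance:.3f}, std dev = {std_dev:.3f}")

# Same results from NumPy's built-ins (ddof=1 gives the n - 1 divisor):
print(np.var(data, ddof=1), np.std(data, ddof=1))
```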


Greg's DOE Adventure: What is Design of Experiments (DOE)?

posted by Greg on April 19, 2019


Hi there. I’m Greg. I’m starting a trip. This is an educational journey through the concept of design of experiments (DOE). I’m doing this to better understand the company I work for (Stat-Ease), the product we create (Design-Expert® software), and the people we sell it to (industrial experimenters). I will be learning as much as I can on this topic, then I’ll write about it. So, hopefully, you can learn along with me. If you have any comments or questions, please feel free to comment at the bottom.

So, off we go. First things first.

What exactly is design of experiments (DOE)?

When I first decided to do this, I went to Wikipedia to see what they said about DOE. No help there.

“The design of experiments (DOE, DOX, or experimental design) is the design of any task that aims to describe or explain the variation of information under conditions that are hypothesized to reflect the variation.” –Wikipedia

The what now?

That’s not what I would call a clearly conveyed message. After some more research, I have compiled this ‘definition’ of DOE:

Design of experiments (DOE), at its core, is a systematic method used to find cause-and-effect relationships. So, as you are running a process, DOE determines how changes in the inputs to that process change the output.

Obviously, that works for me since I wrote it. But does it work for you?

So, conceptually I’m off and running. But why do we need ‘designed experiments’? After all, isn’t all experimentation about combining some inputs, measuring the outputs, and looking at what happened?

The key words above are ‘systematic method’. Turns out, if we stick to statistical concepts we can get a lot more out of our experiments. That is what I’m here for: understanding these ‘concepts’ within this ‘systematic method’ and how they give us an advantage.

Well, off I go on my journey!


Correlation vs. causality

posted by Greg on April 5, 2019


Recently, Stat-Ease Founding Principal Pat Whitcomb was interviewed to get his thoughts on design of experiments (DOE) and industrial analytics. It was very interesting, especially for this relative newbie to DOE. One passage really jumped out at me:

“Industrial analytics is all about getting meaning from data. Data is speaking and analytics is the listening device, but you need a hearing aid to distinguish correlation from causality. According to Pat Whitcomb, design of experiments (DOE) is exactly that. ‘Even though you have tons of data, you still have unanswered questions. You need to find the drivers, and then use them to advance the process in the desired direction. You need to be able to see what is truly important and what is not,’ says Pat Whitcomb, Stat-Ease founder and DOE expert. ‘Correlations between data may lead you to assume something and lead you on a wrong path. Design of experiments is about testing if a controlled change of input makes a difference in output. The method allows you to ask questions of your process and get a scientific answer. Having established a specific causality, you have a perfect point to use data, modelling and analytics to improve, secure and optimize the process.’"

It was the line ‘distinguish correlation from causality’ that got me thinking. It’s a powerful difference, one that most people don’t understand.

As I was mulling over this topic, I got into my car to drive home and played one of the podcasts I listen to regularly. It happened to be an interview with psychologist Dr. Fjola Helgadottir about her research into social media and mental health. As you may know, there has been a lot of attention paid to depression and social media use. When she brought up the concept of correlation and causality, it naturally caught my attention. (And no, let’s not get into Jung’s concept of Synchronicity and whether this was a meaningful coincidence or not.)

The interesting thing that Dr. Helgadottir brought up was the correlation between social media and depression. That correlation is misunderstood by the general population as causality. She went on to say that recent research has not shown any causality between the two, but has shown that people who are depressed tend to use social media more than other people. So there is a correlation between social media use and depression, but one does not cause the other.

So, back to Pat’s comments. The data is speaking. We all need a listening device to tell us what it’s saying. For those of you in the world of industrial experimentation, experimental design can be that device that differentiates correlation from causality.