One of the many things I appreciate about my current job is the trust I get from my boss. That trust manifests itself in a culture of creativity where I get to determine how to go about solving business problems rather than being told what to do. My boss simply directs my attention to a challenge the company is facing or an area he wants to improve and then leaves me to my own devices. We only meet to discuss the company’s direction and goals and to update him on how the engagements are going.

Among the challenges the company was facing, low machine utilization (the proportion of time the machines are actually running) was the most critical to operations. After a rewarding process of experimentation, learning, and partnership building, we are now partnering with Amper, a Chicago-based company that specializes in factory operating and machine-tracking systems, to track utilization, record reasons for interruptions, and systematize preventive maintenance.

Tracking machine effectiveness data using Amper’s factory operating system

Before we got to where we are right now, I went through a journey of learning. I was trying to get to the bottom of what was causing low machine utilization by having conversations with all the stakeholders involved in the process. The more conversations I had, however, the harder it got to visualize the problem using traditional diagramming software (such as MS Visio).

A problem network built on Gliffy

So I started looking for other ways to visualize what was going on. I had worked on a social network project in school where we used R to analyze people’s social interactions across different social media platforms, and I wondered if the same approach could apply to the problem-cause network. After falling down a handful of rabbit holes, it turns out it does. Using the igraph package in R, I was able to visualize the causes of low machine utilization as nodes and edges in a problem network, coloring each node by the department that was the source of the problem and sizing each node by how many other problems it was causing. I have attached the code I used to produce the graph below for those who are interested.

library(igraph)

# Build a directed graph from the edge list (links) and the node attributes (nodes)
net <- graph_from_data_frame(d = links, vertices = nodes, directed = TRUE)

# One color per department; Department.type is an integer code into this vector
colrs <- c("gray50", "tomato", "gold", "azure2", "limegreen", "lightskyblue")
V(net)$color <- colrs[V(net)$Department.type]
E(net)$arrow.size <- 1

# Size each node by its degree, so problems causing (or caused by) more problems appear larger
deg <- degree(net, mode = "all") + 1
V(net)$size <- deg * 6
V(net)$label <- V(net)$Problem

# Kamada-Kawai layout, normalized to the plotting region
lo <- layout_with_kk(net)
lo <- norm_coords(lo, ymin = -1, ymax = 1, xmin = -1, xmax = 1)

par(mar = c(0, 0, 0, 0))
plot(net, layout = lo, rescale = FALSE,
     edge.color = "slategrey", edge.width = 2, edge.arrow.mode = 2)

legend(x = 1, y = 0.8,
       c("Operations", "Programming", "Production", "Sales", "Inventory", "Installation"),
       pch = 21, col = "#777777", pt.bg = colrs, pt.cex = 2, cex = .8, bty = "n", ncol = 1)

dev.off()
A problem network built on R using the igraph package
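For anyone who wants to run the script, it assumes two data frames, `links` and `nodes`, already exist. Here is a minimal sketch of their shape; the problem names and department codes below are hypothetical, made up purely for illustration:

```r
library(igraph)

# Hypothetical node table: first column holds vertex IDs; Problem supplies
# labels and Department.type an integer code that indexes the color vector.
nodes <- data.frame(
  id = c("P1", "P2", "P3"),
  Problem = c("Missing tooling", "Late programs", "Machine idle"),
  Department.type = c(1, 2, 3)
)

# Hypothetical edge table: each row is a directed edge, read as
# "the 'from' problem causes the 'to' problem".
links <- data.frame(
  from = c("P1", "P2"),
  to   = c("P3", "P3")
)

net <- graph_from_data_frame(d = links, vertices = nodes, directed = TRUE)
```

With data frames shaped like these, the plotting code above runs as written; in the real analysis each row of `links` came from a cause-and-effect pair surfaced in the stakeholder conversations.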

Even though this might seem like a simple visualization task, it helped me understand the importance of finding a systematic way to record the reasons for interruptions and analyze them to improve utilization. It also helped me become more comfortable exploring the power of analytical tools such as R and using them to solve problems.

Every time I learn something new I get more excited about the possibilities and opportunities that technology helps us create, and I am looking forward more than ever to continuing my journey of never-ending learning.