Every year, the world is inundated with news about government data-collection programs. In addition to these programs, governments collect data from third-party sources to gather information about individuals. This data, combined with machine learning, helps governments determine where crime will be committed and who has committed a crime. Could this data also serve as a basis for predicting whether a particular individual will commit a crime? This talk examines the use of big data in the context of predictive policing. Specifically, how does the collected data inform suspicion about a particular individual? Under U.S. law, can big data alone establish reasonable suspicion, or should it merely be one factor in the totality of the circumstances? And how do we mitigate the biases that may exist in large data sets?