FloCon 2017
Great Room V-VIII
Tuesday, January 10
 

8:30am PST

Conference Introduction
A brief welcome from our Conference Chair, Ron Bandes, and Ken Slaght of the San Diego Cyber Center of Excellence.



Tuesday January 10, 2017 8:30am - 9:00am PST
Great Room V-VIII 7450 Hazard Center Dr.

9:00am PST

Finding the Needle in the Haystack
With all the information available via NetFlow, finding the "needle in the haystack" (the bad actor in the flow data) can be difficult at best. Methods to discover illegitimate traffic range from simply looking at TCP flags to more complex procedures, such as defining thresholds for the number of flows relative to the number of unique destinations. Other methods are available, but I will focus on these thresholds and ratios and why this approach turns the needle into a goal post. The CPU cycles needed for this analysis are reduced by implementing AVL trees (balanced binary trees) and by recognizing that the bottleneck in processing the data is reading it from disk. The algorithm used takes less than a second to process 3 million flows collected over a 5-minute span. Inbound, outbound, and local traffic all need to be considered.
Inbound analysis helps protect against external threats, outbound analysis protects you from external embarrassment, and local analysis identifies local problems that can lead to bigger ones.
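The threshold-and-ratio approach described above can be sketched roughly as follows. The flow tuples, the threshold values, and the plain dict standing in for the talk's AVL tree are all illustrative assumptions, not the speaker's actual implementation:

```python
from collections import defaultdict

# Hypothetical flow records: (src_ip, dst_ip, tcp_flags)
flows = [
    ("10.0.0.5", "192.0.2.%d" % i, "S") for i in range(1, 201)   # scan-like
] + [
    ("10.0.0.9", "198.51.100.7", "SAF") for _ in range(50)       # one peer
]

def suspicious_sources(flows, min_flows=100, min_ratio=0.9):
    """Flag sources whose flow count and unique-destination ratio
    both exceed thresholds (a scan-like pattern). A dict stands in
    here for the AVL tree used to keep per-source lookups cheap."""
    stats = defaultdict(lambda: {"flows": 0, "dsts": set()})
    for src, dst, _flags in flows:
        stats[src]["flows"] += 1
        stats[src]["dsts"].add(dst)
    flagged = []
    for src, s in stats.items():
        ratio = len(s["dsts"]) / s["flows"]
        if s["flows"] >= min_flows and ratio >= min_ratio:
            flagged.append(src)
    return flagged

print(suspicious_sources(flows))  # only the many-destination source trips
```

The second source talks repeatedly to a single peer, so its destination ratio stays low and it is not flagged.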

Speakers
Jonzy Jones

University of Utah
Jonzy has been an employee at the University of Utah for more than 30 years. He started out as email Postmaster and moved into security after a system breach. Prior to getting into security he was the author of Jughead, now called Jugtail, which was a search engine in Gopher space...



Tuesday January 10, 2017 9:00am - 9:30am PST
Great Room V-VIII 7450 Hazard Center Dr.

9:30am PST

Assessing Targeted Attacks in Incident Response Threat Correlation
The current number of active cyber threats is astounding. Do you know which threats are targeting you right now and which are likely to cause the greatest harm to your company?
This session examines how correlating network flow data with cyber threat information during incident response reveals not only which threats are active or targeting you, but which of your assets are being targeted before or during an incident. We examine the many data types used in commonly shared indicators of compromise and explore which lend themselves to automated correlation with network flow data. The pros and cons of common correlation algorithms are discussed, with a focus on how they contribute to, and limit, threat intelligence efforts. Proper network flow correlation should provide a foundation for risk-based mitigation that identifies the threats creating the greatest loss of value for your organization, rather than chasing down the threats deemed most harmful by the industry.
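One simple form of the flow/IOC correlation discussed here can be sketched as below. The indicator set, flow records, and field names are hypothetical, chosen only to show the mechanics of ranking targeted assets:

```python
# Hypothetical IP-type indicators of compromise and flow records.
iocs = {"203.0.113.10", "203.0.113.44"}
flows = [
    {"src": "10.1.1.5", "dst": "203.0.113.10", "bytes": 4200},
    {"src": "10.1.1.5", "dst": "203.0.113.10", "bytes": 910},
    {"src": "10.1.1.8", "dst": "198.51.100.2", "bytes": 120},
    {"src": "203.0.113.44", "dst": "10.1.1.9", "bytes": 77},
]

def targeted_assets(flows, iocs):
    """Correlate flows with IOCs; return internal assets ranked by
    the number of flows touching a known-bad address."""
    hits = {}
    for f in flows:
        if f["dst"] in iocs:                       # outbound to bad address
            hits[f["src"]] = hits.get(f["src"], 0) + 1
        if f["src"] in iocs:                       # inbound from bad address
            hits[f["dst"]] = hits.get(f["dst"], 0) + 1
    return sorted(hits.items(), key=lambda kv: -kv[1])

print(targeted_assets(flows, iocs))
```

Exact-match set lookup is the easy case; indicator types such as domains or URL patterns need fuzzier matching, which is where the algorithm trade-offs discussed in the session come in.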

Speakers
Jamison Day

LookingGlass Cyber Solutions, Inc.
Jamison M. Day is a Decision Science PhD who was selected as 1 of 5 members nationwide to serve on a Supply Chain Security Team for the U.S. Director of National Intelligence. His interactive analytics products have helped Microsoft and the Department of Homeland Security reduce...
Allan Thomson

LookingGlass Cyber Solutions, Inc.
As LookingGlass Chief Technology Officer, Allan Thomson has more than three decades of experience across network, security and distributed systems technologies. Allan leads technical strategy, architecture and product development across all LookingGlass Dynamic Threat Defense product...



Tuesday January 10, 2017 9:30am - 10:00am PST
Great Room V-VIII 7450 Hazard Center Dr.

10:00am PST

SilkWeb - Analyzing Silk Data through API and Javascript Frameworks
This demo will showcase SilkWeb, a tool built with APIs and modern JavaScript frameworks to analyze SiLK network flow data. SilkWeb creates simple web-service data interfaces that can replace some command-line queries with web-service requests. This opens up a number of opportunities for visualization, integration and automation. A simple setup of jQuery-based interfaces will be showcased, demonstrating the use of JavaScript frameworks to visualize SiLK data and to onboard a junior analyst learning NetFlow. There is also an open opportunity to integrate SiLK data with other tools such as a SIEM using simple web-service requests over the network. The web server can expose this data through a REST-style interface to automate routine tasks.
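A minimal sketch of such a web-service interface is shown below. The `/flows` endpoint, the field names, and the in-memory flow table are invented stand-ins for what SilkWeb would answer from SiLK queries (e.g. an rwfilter/rwcut pipeline):

```python
import json
from urllib.parse import urlparse, parse_qs

# Tiny stand-in flow table; a real service would run a SiLK query.
FLOWS = [
    {"sip": "10.0.0.1", "dip": "192.0.2.9", "dport": 80, "bytes": 1200},
    {"sip": "10.0.0.2", "dip": "192.0.2.9", "dport": 443, "bytes": 560},
    {"sip": "10.0.0.1", "dip": "198.51.100.3", "dport": 53, "bytes": 90},
]

def handle_query(url):
    """Answer a REST-style request like /flows?sip=10.0.0.1 with JSON,
    so a jQuery front end (or a SIEM) can consume flow data over HTTP."""
    q = parse_qs(urlparse(url).query)
    rows = [f for f in FLOWS
            if all(str(f.get(k)) == v[0] for k, v in q.items())]
    return json.dumps({"count": len(rows), "flows": rows})

print(handle_query("/flows?sip=10.0.0.1"))
```

Wrapping this handler in any HTTP server gives the "web service over the network" integration path the abstract describes.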

The demo will showcase the use of this software at an ISP to perform routine tasks and provide a quick way for network and security personnel to query and navigate NetFlow data. Use cases that ISPs use this for today will be covered in the demo: 1. DDoS detection, using a number of simple steps to walk through and find offending customers. 2. Abuse/misuse detection, using a set of criteria to find customers who violate policy and increase risk to the ISP environment. 3. Detection of malicious probes into server networks, using anomalous network traffic.

These use cases will be demonstrated by an ISP that uses SiLK and SilkWeb to meet these needs.

Speakers
Vijay Sarvepalli

Senior Member of the Technical Staff, CERT Division, Software Engineering Institute
Vijay Sarvepalli is a senior member of the technical staff for the CERT® Coordination Center in the CERT Program at the Software Engineering Institute (SEI). As a member of the Monitoring and Response directorate, he supports sponsors in multiple areas from enterprise architecture...



Tuesday January 10, 2017 10:00am - 10:30am PST
Great Room V-VIII 7450 Hazard Center Dr.

11:00am PST

Challenges and Opportunities in Protecting the World's Largest Network
Sandra J. Radesky of DISA Global will provide a Keynote address.

Speakers
Sandra J. Radesky (Parris)

Deputy, Future Plans and Programs Division and Lead Cyber Strategist, Defense Information Systems Agency Global Operations Command (DISA Global)
DISA Global operates, maintains and defends the Department of Defense Information Network (DoDIN) and provides Defensive Cyber Operations...


Tuesday January 10, 2017 11:00am - 12:00pm PST
Great Room V-VIII 7450 Hazard Center Dr.

1:00pm PST

Backbone Network DRDoS Attack Monitoring and Analysis
DRDoS (Distributed Reflection Denial of Service) is now the most popular and powerful DDoS method. As continual reports of the form "The largest DDoS attack ever, an XXX Gbps attack against YYY, is ongoing" show, the "YYY" may or may not change, but the "XXX" is steadily rising. We have to understand the problem better if we want to solve it better. Based on the backbone network traffic we can access, our team runs the largest publicly available Chinese PassiveDNS database (passivedns.cn) and the Global DDoS Attack Detection System (ddosmon.net). In this talk, I will share practical experience with DRDoS monitoring on a backbone network, along with analysis results from the perspective of our data.

The following questions will be covered:

  1. DRDoS: the most popular and powerful DDoS method
     a) DDoS in all network traffic
     b) DRDoS in all DDoS
  2. DRDoS monitoring in NetFlow
     a) Process architecture & data modeling
     b) Key features: packet size/length dispersion, talker dispersion, well-known ports, fragmented packets
     c) Partial data
     d) ICMP as a side-effect indicator
     e) Interesting case: tracking an unknown amplifier
  3. DRDoS monitoring in PDNS
     a) Observation point matters
     b) Key features: source port / transaction ID / query type
     c) Side effect: query spikes to the authoritative server
     d) Interesting case: a bug that caused an attack to fail
  4. Cross-validation
  5. Amplifier utilization report: kill top, kill half
  6. FQDN utilization report: kill top, kill almost all
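The NetFlow key features in item 2 can be illustrated with a toy scoring function. The sample flows, the choice of NTP as the reflector port, and the dispersion cutoff are all assumptions for illustration, not the detection system's real logic:

```python
from statistics import pstdev

# Hypothetical flow sample: (src_ip, src_port, dst_ip, pkt_len).
# 40 reflectors answering from NTP port 123 with near-uniform sizes,
# plus some ordinary client traffic to the same host.
flows = [("198.51.100.%d" % i, 123, "203.0.113.9", 468) for i in range(40)] + \
        [("192.0.2.7", 51000 + i, "203.0.113.9", 60 + i) for i in range(10)]

def drdos_score(flows, reflector_port=123):
    """Score a victim using the listed features: many distinct talkers,
    a well-known source port, and low packet-length dispersion
    (amplified replies tend to be near-uniform in size)."""
    hits = [f for f in flows if f[1] == reflector_port]
    if not hits:
        return 0
    talkers = len({f[0] for f in hits})
    length_spread = pstdev([f[3] for f in hits])
    # Many talkers + uniform packet sizes => likely reflection attack.
    return talkers if length_spread < 5 else talkers // 10

print(drdos_score(flows))
```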

Speakers
Yang Xu

Yang is a Network Security Engineer with 6 years of experience in the field and currently a member of Network Security Research Lab at Qihoo 360 (Netlab) where he focuses on network/passive-DNS, data process/analysis, and threat research like DDoS Monitoring, Scanner Tracking. Before...



Tuesday January 10, 2017 1:00pm - 1:30pm PST
Great Room V-VIII 7450 Hazard Center Dr.

1:30pm PST

DDoS Defense with a Community of Peers (3DCoP)
Distributed Denial of Service (DDoS) attacks have grown dramatically in size over the last few years. Modern amplification attacks can easily generate over 500 Gbps of traffic, threatening companies, ISPs and cloud infrastructure. To help defend against these advanced threats, Galois is developing 3DCoP: a peer-to-peer (P2P) system that uses collaboration between networks to detect and mitigate malicious traffic. 3DCoP analyzes traffic and shares information about suspicious patterns, allowing the community of peers to detect and respond to threats before their networks are overwhelmed with traffic. Our simulations show that 3DCoP may be able to detect spoofed IP addresses and suppress
amplification-based DDoS attacks.

In our system, each network runs a 3DCoP node that monitors the traffic crossing its boundaries. The nodes are connected to each other over a decentralized P2P network, allowing messages to be exchanged out-of-band over various transport mechanisms as needed, which gives resilience and flexibility under attack conditions.

With 3DCoP, different networks can exchange messages about their flows, effectively letting them talk about their traffic. This innovation leads to many interesting possibilities, and in this project we focus on using this flow-sharing to achieve DDoS defense. Even with a minimal deployment of 3DCoP nodes, it may be possible to mitigate DDoS attacks closer to their sources. This system potentially gives small and medium-sized networks the ability to defend themselves against even the largest-scale DDoS attacks.
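The peer corroboration idea can be sketched as follows. The Peer class, the rate threshold, and the per-network observations are hypothetical, not 3DCoP's actual message protocol:

```python
class Peer:
    """One network's 3DCoP-style node: records what it sees for a
    suspect source address (names and fields are hypothetical)."""
    def __init__(self, name):
        self.name = name
        self.observations = {}   # src_ip -> packets/sec seen toward victim

    def observe(self, src_ip, pps):
        self.observations[src_ip] = pps

    def report(self, src_ip):
        return self.observations.get(src_ip, 0)

def corroborate(peers, src_ip, threshold_pps=1000):
    """Count how many peers independently see high-rate traffic from
    src_ip; agreement across networks is the shared P2P signal, and
    disagreement hints at spoofed addresses."""
    return sum(1 for p in peers if p.report(src_ip) >= threshold_pps)

a, b, c = Peer("net-a"), Peer("net-b"), Peer("net-c")
a.observe("192.0.2.1", 5000)
b.observe("192.0.2.1", 3000)
c.observe("192.0.2.1", 10)   # the source's own edge sees almost nothing
print(corroborate([a, b, c], "192.0.2.1"))
```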

This project is the result of funding provided by the Science and Technology Directorate of the United States Department of Homeland Security under contract number D15PC00185. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Department of Homeland Security, or the U.S. Government.

Speakers
Jem Berkes

Galois, Inc.
Mr. Berkes has 15 years of experience developing software to defend against Internet-based threats, particularly malware, remote exploits, and spam. At Galois, Mr. Berkes is the Research Lead for DDoS Defense and previously worked on experimental operating system defenses and probabilistic...



Tuesday January 10, 2017 1:30pm - 2:00pm PST
Great Room V-VIII 7450 Hazard Center Dr.

2:00pm PST

Netflow Collection and Analysis at an Internet Peering Point

Analysis of IP flow records from Internet peering points presents some interesting challenges. The total volume is large to say the least, the number of hosts is very large and diverse, and the number of flows per Gbps of bandwidth is larger than in most enterprises. The traffic is all asymmetrical, and the infrastructure seems to be always evolving: just as the migration from SONET cores to everything-Ethernet is completed, the introduction of orchestrated NFVs begins. The new infrastructures again present challenges, but they also provide opportunities for new approaches using orchestrated Security Functions Virtualization (SFVs or SNFVs). The orchestration capabilities can enable scheduled surveillance of traffic and network elements. The challenges are all surmountable, and the analysis is effective and useful.


Speakers
Fred Stringer

Architect/Systems Engineer, AT&T- CSO
Fred Stringer is an Individual Contributor Engineer in the Threat Intelligence, Analysis and Response Engineering (TIARE) department in AT&T’s Chief Security Office. He is the Architect of the security data acquisition network and the System Engineer defining security analysis tools...



Tuesday January 10, 2017 2:00pm - 2:30pm PST
Great Room V-VIII 7450 Hazard Center Dr.

2:30pm PST

DISA Cyclops Program

In an IT environment where more and more enterprise IT is located outside the physical confines of enterprises, security data collection and sensing have to follow. "Cyclops" is the US Department of Defense's solution for migrating the collection of unsampled flow data, network metadata, and security analysis to the cloud. Sylvia Mapes (DISA), Alan Fraser (CenturyLink), and Greg Virgin (RedJack) will discuss the challenges and solutions of flow data "as-a-service," including deployment strategies, analysis strategies, and coping with the massive scale of malicious activity on ISP-sized network connections.


Speakers
Alan Fraser

IT Systems Engineer, CenturyLink
Alan Fraser is the lead Security Engineer at CenturyLink supporting DOD operations. He provides strategic operational and planning support for network and security activities to a variety of federal programs. With an operational background, he's evolved the partnership between...
Greg Virgin

Founder/CEO, RedJack, LLC
Greg is the Founder and CEO of RedJack, LLC, a cybersecurity company focused on data analysis on enterprise networks. Greg works to identify attack behaviors and discover critical enterprise assets and vulnerabilities through a unique sensor platform deployed to government and commercial...



Tuesday January 10, 2017 2:30pm - 3:00pm PST
Great Room V-VIII 7450 Hazard Center Dr.

3:30pm PST

Using flow for realtime traffic management in 100G networks
Enterprise networks with speeds up to 100 gigabits per second are now moving into wide-scale deployment. However, these high-speed links are a challenge to monitor and control. Fortunately, the parallel emergence of software-defined networking (SDN) protocols makes it possible to consider not only dynamically reconfiguring the network, but also transparently managing individual connections. An ability to make these decisions in realtime would permit more efficient use of available bandwidth, especially on high-cost links.

In this talk we present the results of experiments in realtime traffic management on the Stanford University network. An illustrative test case is the management of "elephant flows," a term used to describe the large file transfers and streaming sensor data often seen on research networks. In an elephant flow, the connection typically starts out with small exchanges for authentication and resource allocation, then switches into a phase of high-volume data transfer, ramping down at the end with confirmations and teardown. The goal of the system is to identify the changing characteristics of the flow and move it to a link appropriate to its size. We observe that the fields and metrics in argus flow status records, which are emitted periodically during connections, contain the fundamental data necessary to make such bandwidth-based decisions. The decisions can then be turned into dynamically issued commands to an OpenFlow controller. The presentation will describe the experimental setup and the challenges of flow collection at 100G, and report on the effectiveness of dynamic flow management.
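The ramp-up detection described above might look roughly like this in code. The interval length, the rate threshold, and the reroute action name are invented for illustration; a real deployment would translate the action into OpenFlow flow-mods via a controller:

```python
# Periodic status records for one connection: bytes seen in each
# reporting interval, as a flow sensor might emit them periodically.
INTERVAL_SEC = 5
ELEPHANT_BPS = 100_000_000   # reroute when the sustained rate tops 100 Mb/s

def plan_actions(byte_counts, sustain=2):
    """Watch per-interval byte counts; after `sustain` consecutive
    high-rate intervals, emit a (hypothetical) reroute decision that
    a controller shim would turn into OpenFlow commands."""
    actions, streak = [], 0
    for i, nbytes in enumerate(byte_counts):
        bps = nbytes * 8 / INTERVAL_SEC
        streak = streak + 1 if bps >= ELEPHANT_BPS else 0
        if streak == sustain:          # fire once, when the streak forms
            actions.append(("reroute_to_bulk_link", i))
    return actions

# small handshake intervals, then the bulk-transfer phase kicks in
counts = [20_000, 35_000, 80_000_000, 90_000_000, 95_000_000]
print(plan_actions(counts))
```

Requiring a sustained streak avoids rerouting on a single bursty interval, which matches the ramp-up/ramp-down shape the abstract describes.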

Bandwidth management for individual connections is not the only kind of realtime intervention that could be envisioned, as SDN provides mechanisms applicable to a variety of tasks. The presentation concludes with a look at other opportunities drawn from traditional traffic engineering and network security.

Speakers
John Gerth

Stanford University
John Gerth is the Information Security Officer for the Electrical Engineering and Computer Science departments at Stanford University. He designed and deployed their network flow collection system and has worked with law enforcement on criminal investigations. He is a member of the...
Johan van Reijendam

Stanford University
Johan van Reijendam is a manager in the network engineering organization of Stanford UIT responsible for backbone networks. His research interests include high-performance networks and their management.



Tuesday January 10, 2017 3:30pm - 4:00pm PST
Great Room V-VIII 7450 Hazard Center Dr.

4:00pm PST

Metrics-Focused Analysis of Network Flow Data
This presentation discusses the use of network management metrics, and how these metrics may influence, or be influenced by, the analysis of network flow data. Using the Security Content Automation Protocol (SCAP) as a base set of data definitions for metrics description, the talk walks through several candidate metrics and describes how they can be used in network flow analysis. These metrics range from ones derived from host population counts to others derived from vulnerability scans. The talk includes several manufactured examples that were derived from real situations.

Speakers
Timothy Shimeall

Senior Network Situational Awareness Analyst, CERT Program at Software Engineering Institute
The only person to make more than ten consecutive appearances at FloCon, Tim is the Senior Network Situational Awareness Analyst of the CERT Program at the Software Engineering Institute (SEI). Tim is responsible for developing methods to support decision making in security at and...



Tuesday January 10, 2017 4:00pm - 4:30pm PST
Great Room V-VIII 7450 Hazard Center Dr.

4:30pm PST

Running Reliable Network Security Monitoring Infra @ Facebook
Packet monitoring for threat detection is a seemingly simple concept, but effective implementation is not. Reliable and scalable solutions must carefully consider each hardware and software component individually, as well as how they work together on the network. Facebook runs network security monitoring (NSM) infrastructure across multiple sites around the world. How do we ensure all our traffic is monitored for incidents -- packet loss/drops, network blind spots, missing network coverage, etc. -- and quickly provide accurate results to our security analysts? In this talk, we will explain how we run NSM infrastructure at Facebook scale to monitor our global infrastructure. We will define metrics for reliability and describe how we collect statistical data from different hardware appliances and NSM applications, including Bro, an open source network security platform, and Suricata, an open source intrusion detection/prevention system. We'll walk through how we verify the integrity of the data and then use it to build statistical models for creating actionable alerts when an abnormality is detected. We will share real-world scenarios that we have seen on our networks, how we resolved those issues, and what we learned from these events.
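A baseline-and-deviation alert of the kind described can be sketched very simply. The drop counts and the z-score threshold below are illustrative assumptions, not Facebook's actual statistical models:

```python
from statistics import mean, pstdev

def alert_on_anomaly(history, current, z_threshold=3.0):
    """Alert when the current reading (e.g. capture drops per minute)
    sits more than z_threshold standard deviations above the baseline
    built from recent history."""
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return current > mu
    return (current - mu) / sigma > z_threshold

drops = [10, 12, 9, 11, 10, 13, 11, 10]   # hypothetical per-minute drops
print(alert_on_anomaly(drops, 12))        # within normal variation
print(alert_on_anomaly(drops, 500))       # a blind spot worth paging on
```

The same shape works for any of the health signals mentioned: drop counters, log volume per sensor, or coverage per network segment.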

Speakers
Sereyvathana Ty

Facebook
Sereyvathana Ty is a member of Detection Infrastructure at Facebook working on network security monitoring instrumentations. Before joining Facebook, he was a malware researcher for Palo Alto Networks where he was researching new techniques for detecting malware and developing mitigation...



Tuesday January 10, 2017 4:30pm - 5:00pm PST
Great Room V-VIII 7450 Hazard Center Dr.
 
Wednesday, January 11
 

8:30am PST

A Network Flows Visualization Framework and API for Network Forensics and Analytics in the Web
High-performance data networks such as Science DMZ networks are being deployed in research institutions all over the nation to provide high-speed big data transfer for intra- and inter-institutional collaborations. The amount of network data generated by such networks is very costly to store and/or process for network security, network situational awareness, and network forensics. These research networks cannot rely on traditional firewalls for their security, because firewalls tend to deter data transfer performance. Visualization analytics plays a major role in the detection of events in big data, as it has in network visualization. To help with the analysis, we present an API, built on SiLK, with functions to filter network flows through a web interface and feed the output to web visualizations, thereby (1) giving non-shell-savvy system administrators the power to manage network flow data from the web, (2) providing a bridge between the processing of big network data and visualization analytics researchers, and (3) providing network analysis as a web service in the cloud.

Speakers
Ian Dávila

Ian Davila is a junior undergraduate student in Computer Science at the University of Puerto Rico, Rio Piedras, USA. He worked under the supervision of Dr. Jose Ortiz where he implemented several visualizations for web based applications for network security. This past summer, he...
Julio J De la Cruz Natera

Julio de la Cruz is a senior undergraduate student in Computer Science at the University of Puerto Rico, Rio Piedras, USA. In the summer of 2015 he participated in the NIST Summer Undergraduate Research Fellowship where he worked with Dr. Michaela Iorga creating a visualization of...
José Ortiz Ubarri

José Ortiz-Ubarri is an Associate Professor at the University of Puerto Rico, Rio Piedras, USA. He received a B.S. degree in Computer Science from the University of Puerto Rico (UPR), Rio Piedras in 2003, and a PhD degree in Computer Science and Engineering from the University of...



Wednesday January 11, 2017 8:30am - 9:00am PST
Great Room V-VIII 7450 Hazard Center Dr.

9:00am PST

Mothra: A Large-Scale Data Processing Platform for Network Security Analysis
NetFlow was designed to retain the key attributes of network conversations between TCP/IP endpoints on large networks without having to collect, store, and analyze all of the network's packet-level data. Over time, however, demand has increased for a platform that can support analytical workflows that use attributes beyond the transport layer. With the advent of template-based flow formats such as IPFIX, flow collectors can collect and export some of these attributes, but retaining finer-grained details of network conversations in a more flexible format has made efficient storage and analysis of this data at scale challenging.

The Mothra network analysis platform, built on the Apache Spark cluster computing framework, enables scalable analytical workflows that extend beyond the limitations of conventional flow records. In this presentation, I will describe the Mothra architecture and demonstrate some of its capabilities, with a focus on how the platform can provide for increased analytical fidelity, simplified sharing of analysis techniques and results, and
reduced training time for new analysts.
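The kind of group-by over enriched attributes that such a platform enables can be illustrated in plain Python. This is a stand-in for a Spark-style aggregation, not Mothra's actual API; the records and the choice of TLS SNI as the enriched field are hypothetical:

```python
from collections import Counter

# Enriched flow records carrying an application-layer attribute
# (TLS SNI), the kind of field that doesn't fit classic 5-tuple NetFlow.
records = [
    {"sip": "10.0.0.1", "sni": "example.com", "bytes": 5000},
    {"sip": "10.0.0.2", "sni": "example.com", "bytes": 700},
    {"sip": "10.0.0.3", "sni": "bad.invalid", "bytes": 42},
]

def top_sni(records, n=2):
    """Group by an enriched attribute and sum bytes; on a cluster this
    would be a DataFrame groupBy/agg distributed across workers."""
    totals = Counter()
    for r in records:
        totals[r["sni"]] += r["bytes"]
    return totals.most_common(n)

print(top_sni(records))
```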

Speakers
Anthony Cebzanov

Engineer, CERT Division, Software Engineering Institute
Tony Cebzanov is a Member of the Technical Staff at Carnegie Mellon University’s Software Engineering Institute. As a software engineer working for the CERT Security Automation Directorate, Tony develops software systems used to detect and mitigate network security threats. Tony...



Wednesday January 11, 2017 9:00am - 9:30am PST
Great Room V-VIII 7450 Hazard Center Dr.

9:30am PST

Low Hanging Fruit Tastes Just as Good
We often hear some network security tasks described as "low-hanging fruit." Yet there are network monitoring tasks that seem simple but are so tedious, or take so long to produce results, that they never get the time and effort they deserve. Taking the time to accomplish these seemingly simple tasks can provide valuable situational awareness. We used the CERT NetSA security tool suite to monitor traffic and establish baselines for our internal network IP addresses. By deriving simple network statistics for each IP address, we are able to automate alert generation when anomalous behavior is detected. Similarly, we are able to build lists of all of the domains queried from our network. After enough time, any new domains, and changes to previously seen domains, are worth investigating. This talk demonstrates the steps we took to perform this analysis with our publicly available tool suite.
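The new-domain portion of this baselining can be sketched in a few lines. The baseline and observed domains below are made up for illustration, and real use would fold acknowledged entries back into the baseline over time:

```python
# Day-over-day domain baselining: after a learning period, any
# domain not in the baseline becomes the alert feed.
baseline = {"example.com", "ntp.org", "debian.org"}

def triage_domains(seen_today, baseline):
    """Split today's queried domains into known and new; exact-match
    baselines flag new subdomains too, which may or may not be wanted."""
    return sorted(set(seen_today) - baseline)

today = ["example.com", "updates.debian.org", "xj9-payload.example.net"]
print(triage_domains(today, baseline))
```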

Speakers
Dan Ruef

Network Security Test Engineer, CERT Division, Software Engineering Institute
Dan Ruef is a member of the Security Automation Directorate at SEI/CERT. He graduated with a master's degree in Information Security Technology from Carnegie Mellon University (2006) and Bachelor of Science degree in Mathematics and Computer Science from Case Western Reserve University...
Emily Sarneso

Network Security Software Developer, CERT Division, Software Engineering Institute
Emily Sarneso is a member of the Security Automation Directorate at SEI/CERT. She graduated with a master's degree in Information Science from the University of Pittsburgh (2009) and a bachelor's degree in Mathematics from Saint Vincent College (2007). Emily is the lead developer...



Wednesday January 11, 2017 9:30am - 10:00am PST
Great Room V-VIII 7450 Hazard Center Dr.

10:00am PST

Next Generation Incident Response: Tools and Methods for Hunting and Responding to Advanced Threats
AT THE REQUEST OF THE PRESENTERS, THESE SLIDES WILL NOT BE SHARED.
The cyber threat landscape is constantly shifting. Attackers continually develop new tactics, techniques, and procedures (TTPs) to breach and gain entry into systems. This requires incident response teams to adapt and respond to these agile and dynamic threats on a daily basis. The National Cybersecurity and Communications Integration Center's (NCCIC) Hunt and Incident Response Team (HIRT) is the primary source of agile and dynamic incident response and hunt services for the entire federal network space. In this capacity, HIRT must assess and adapt to the myriad operational hurdles caused by the dynamic nature of the adversary and the uniqueness of every client network it deploys to. NCCIC HIRT adapts to these variables in two ways. First, a sound methodology for ad hoc deployment to client networks must be established; this methodology serves as the foundation for all hunt and incident response operations. Second, data from disparate sources must be integrated and correlated: host-based data, network flow, infrastructure devices, and intelligence sources must all be used in conjunction with one another to achieve success in the field. NCCIC HIRT uses custom hardware and software solutions, with accompanying analysis and deployment methodologies, so that all components of the mission work seamlessly. Next-generation incident response kits and accompanying methodologies and workflows have been developed to combat this constantly changing threat landscape.

Speakers
Casey Kahsen

Northrop Grumman
Casey has over 7 years of experience in digital forensics and cyber operations. He has been supporting the Department of Homeland Security with Northrop Grumman for over two years. During this time he has supported projects including cyber hygiene and threat reporting, automated indicator...
David P Zito

Senior Incident Response Analyst, Northrop Grumman
David graduated from Longwood University in 2007 with a Bachelor’s Degree in Computer Science. He went on to receive his Master’s Degree in Cyber Security from University of Maryland University College in 2013. In addition to his degrees, David also holds the GIAC Certified...


Wednesday January 11, 2017 10:00am - 10:30am PST
Great Room V-VIII 7450 Hazard Center Dr.

11:00am PST

Delivering Cyber Warfighting Capability from Seabed to Space
Pat Sullivan of SPAWAR will be providing a Keynote Presentation

Speakers
Pat Sullivan

Executive Director, Space and Naval Warfare Systems Command (SPAWAR)
Mr. Patrick M. Sullivan is currently the Executive Director for the Space and Naval Warfare Systems Command. In this role, he shares responsibility for over 9,600 civilian and military personnel and a budget of over $10 Billion, dedicated to the acquisition, delivery and sustainment...



Wednesday January 11, 2017 11:00am - 12:00pm PST
Great Room V-VIII 7450 Hazard Center Dr.

1:00pm PST

Flow-Based Monitoring, Troubleshooting and Security using nProbe
Flow-based network traffic monitoring plays a crucial role in troubleshooting application problems, investigating security incidents, and complying with industry and government regulations. However, most flow-based probes embedded in network devices are limited to basic counters such as packets and bytes. In addition, probes embedded in security devices often produce 'event-driven' flows based on firewall status (e.g. when a connection is created or deleted from the firewall table), complicating measurements without adding any security-specific information elements besides DPI.

For years, both research and industry have been focusing on how to overcome the limitations of flow devices. We have decided to focus on 'augmented' flow generation using both raw packets and other sources of network data (e.g. sFlow- and NetFlow-capable devices), as we believe that rich flow generation is the first step toward next-generation traffic monitoring. With this belief at the core of our mission, we created the nProbe family of flow-based traffic monitoring software, efficient enough to keep up with the latest 100 Gbit technologies while enriching flows with hundreds of new information elements.

nProbe is a family of software-based flow collectors and probes able to handle standard and extended flow formats (e.g. those produced by Cisco ASA devices and Palo Alto firewalls). It contextualizes and harmonizes heterogeneous data into 'augmented' flows enriched with information (almost 300 information elements are supported by nProbe) on Layer-7 applications, telemetry data, DNS queries, HTTP URLs, SSL/TLS certificates and more, for real traffic troubleshooting and security analyses. Lua scriptability enables custom applications to leverage the framework and create monitoring solutions directly on the probe, rather than using the classic flow-probe/flow-collector model, which is less efficient and cannot execute timely actions on monitored data. nProbe can also deliver augmented flow data in standard formats to simple text files and syslog, as well as to more sophisticated sinks such as Apache Kafka clusters, MySQL, ElasticSearch and Splunk. This flexibility allows companies to quickly, efficiently and seamlessly integrate the software into their existing infrastructures.

Speakers

Luca Deri

Software Engineer, ntop
Luca Deri is the leader of the ntop project (www.ntop.org), aimed at developing an open-source monitoring platform for high-speed traffic analysis. He worked for University College of London and IBM Research prior to receiving his PhD at the University of Berne with a thesis on software... Read More →



Wednesday January 11, 2017 1:00pm - 1:30pm PST
Great Room V-VIII 7450 Hazard Center Dr.

1:30pm PST

Navigating the Pitfalls and Promises of Network Security Monitoring
Network security monitoring has been around for decades, but the data generated from high volume sources such as cloud, mobile and IoT creates a brand new set of challenges. This presentation will explore how companies can begin to fix these visibility issues using the Bro
open-source network security monitoring framework to perform dynamic targeted logging and enable the cyber hunting mission.

Too often, network admins are forced to choose between a thorough analysis and a fast one. With the capacity to manually review only 10, 100, or at most 200 potential events per day, an admin's bandwidth is stretched thin, leaving many opportunities for error. Coupled with the high volume of data from modern sources, there is little confidence that traditional detection methods will catch every threat.

Instead of manually searching PCAP logs or summarizing network traffic with NetFlow, Bro allows organizations to gather detailed metadata on network traffic from multiple protocol layers. Bro can be leveraged to look for events that occurred anywhere from the last six months to the last six minutes, enabling the cyber hunter's mission. By combining targeted logging with the ability to filter, analyze, and enrich with potential indicators of compromise, analysts get more information with which to prioritize and respond. With targeted logs, automated analysis becomes both more feasible and more effective than traditional full-take log anomaly detection.

Combined with the right cyber hunting approach, analysts can gain new visibility into threats that have existed in the network for a long time or focus on catching threats near the moment of compromise. Our approach allows you to automate the process of sifting through months of data to find evidence of a breach.

Attendees will learn how to solve the high-volume data issue associated with network monitoring and become more efficient cyber hunters. We will walk through several examples where targeted logging clearly discovers and confirms malicious activity, show examples of the Bro logging, filtering, and automated analysis techniques used, and discuss real-world use cases accompanied by statistical information demonstrating data reduction.

Speakers

Scott B Miserendino

Chief Data Scientist, BluVector
Dr. Scott Miserendino serves as BluVector’s chief data scientist. His responsibilities are to enhance the analyst’s ability to identify, reason over and act on previously unknown threats. He leads the development of BluVector’s machine learning-based analytic engines and other... Read More →



Wednesday January 11, 2017 1:30pm - 2:00pm PST
Great Room V-VIII 7450 Hazard Center Dr.

2:00pm PST

echo 'PCAP cant scale'| sed 's/cant/does/'
Companies need reliable packet capture to maintain an accurate source of truth for what happened on their networks. NetFlow can't recreate that tarball deleted off your server once attackers finished their exfiltration, and it's not always detailed enough to write an IDS signature from. "Capture All the Things!" seems impossible to scale in the real world, and even host-based IDS and network logging are incomplete solutions. This leaves incident response teams with conjecture: "we saw traffic on this port, but we don't know what it was."

Historically, scaling packet capture infrastructure to meet corporate network demands is a significant challenge. Physical space for infrastructure is limited, traffic rates are too high to maintain meaningful retention windows, and cost is prohibitive. Additionally, how do you efficiently query petabytes of data in time to resolve an incident?

To address this problem, our in-house security team built a scalable, cost-effective, multi-petabyte solution using the Open Compute Project. This presentation will walk you through the architecture and design decisions that helped us build a packet capture infrastructure capable of handling tens of Gbps per host and providing retention measured in petabytes. This solution automatically delivers packets to analysts and responders, so they can quickly identify and report the truth of what happened during an incident.

Speakers

Erik Waher

Erik is a security engineer with a love of all things on the network.



Wednesday January 11, 2017 2:00pm - 2:30pm PST
Great Room V-VIII 7450 Hazard Center Dr.

2:30pm PST

Flow Collection and Analytics at Verizon

Verizon Network Security Services collects NetFlow from internal devices, edge routers, and the Internet backbone. The group is also the central repository for logs in hundreds of formats from thousands of machines, including firewalls, IDS engines, web proxies, SNMP managers, BGP aggregators, DNS servers, and desktops. Deriving useful information from all this data is a task shared by the data owners, repository operators, and security analysts.

In this presentation we will go over the growth of the Verizon Network Security data repository and the infrastructure in place that receives and processes 100GB of data an hour, including two billion flows. We will also cover some of the open source, commercial, and homegrown software that helps the security, network planning, and network performance teams gain insight into the current state of networks, from local offices to the Internet.

We will also discuss some of the challenges encountered along the way, various attempts to make searching flow faster, and some recent developments using machine learning to identify attacks on the network.


Speakers

Dennis Marti

Verizon
Dennis is a member of the Wireline Network Security team within the Verizon Services Organization. For the past eight years he has helped build the network security data repository. Prior to Verizon, he worked for companies building network encryption products, and one or two providing... Read More →



Wednesday January 11, 2017 2:30pm - 3:00pm PST
Great Room V-VIII 7450 Hazard Center Dr.
 
Thursday, January 12
 

8:30am PST

Uncovering Beacons Using Behavioral Analytics and Information Theory
A beacon, or a heartbeat, is machine-generated traffic leaving the network to confirm availability to or seek new instructions from an external system. Beacons may be used for innocuous purposes (such as checking for Microsoft updates) or for malicious purposes (such as registering an infected host to a C2 server). In this presentation, we will demonstrate how to detect beacons using a combination of packet count entropy, producer-consumer ratios, and dynamically generated hostname detection across a Bro dataset. Packet count entropy is used to measure variance in the number of packets transmitted in a set of connections, with the assumption being that human-driven traffic will exhibit a wide distribution of different packet counts across connections and beaconing traffic will exhibit a comparably low distribution of different packet counts. Producer-consumer ratios compare the number of bytes leaving a client with the number of bytes returning to a client to detect clients regularly transmitting data outward without receiving data in return. Dynamically generated hostname detection looks for hosts with machine-generated hostnames to root out hosts that may attempt to escape detection by constantly changing hostnames. We combine these three independent signals to detect potential hosts that are attracting beacon connections from inside our network. We can then cross-reference this data against open-source and proprietary threat intelligence to detect possible C2 servers.

In this presentation, we will demonstrate that these tasks can be accomplished using a small number of SQL scripts that can be easily parameterized, with results aggregated by a Python or shell script. As such, they can easily be automated to run on a set frequency or when new batches of data are available.
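The first two signals described above can be sketched in a few lines of Python. This is a hypothetical illustration of the metrics themselves, not the speakers' actual SQL implementation; the flow records and byte counts are invented:

```python
import math
from collections import Counter

def packet_count_entropy(packet_counts):
    """Shannon entropy of per-connection packet counts. Low entropy suggests
    machine-generated traffic: beacons tend to repeat the same packet count,
    while human-driven traffic shows a wide spread."""
    total = len(packet_counts)
    freq = Counter(packet_counts)
    return -sum((n / total) * math.log2(n / total) for n in freq.values())

def producer_consumer_ratio(bytes_out, bytes_in):
    """PCR in [-1, 1]: +1 means the client only sends (pure producer),
    -1 means it only receives. A persistently positive PCR with little
    return traffic can indicate one-way beaconing or exfiltration."""
    total = bytes_out + bytes_in
    return 0.0 if total == 0 else (bytes_out - bytes_in) / total

# Hypothetical flow records for one (client, server) pair:
# (packets, bytes_out, bytes_in) per connection.
flows = [(6, 420, 60), (6, 420, 60), (6, 418, 60), (6, 420, 58)]
h = packet_count_entropy([p for p, _, _ in flows])          # 0.0: identical counts
pcr = producer_consumer_ratio(sum(o for _, o, _ in flows),
                              sum(i for _, _, i in flows))  # ~0.75: mostly outbound
```

Both metrics are cheap enough to compute per client-server pair over a whole day of flow data before combining them with the hostname signal.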

Speakers

Eric Dull

Specialist Leader, Deloitte & Touche, LLP
Eric Dull is a Specialist Leader at Deloitte, leading large-scale data science and cyber security applications for a variety of United States Government and commercial clients. He is an expert in applied graph theory, data mining, and anomaly detection.  His work includes machine... Read More →

Brian Sacash

Specialist Senior, Deloitte & Touche, LLP
Brian Sacash is a Specialist Senior at Deloitte, focusing on data science and software development in the cyber security sector. He has experience employing natural language processing, statistical analysis, and machine learning, using big data technologies, for analytic-based decision... Read More →



Thursday January 12, 2017 8:30am - 9:00am PST
Great Room V-VIII 7450 Hazard Center Dr.

9:00am PST

Discovering Deep Patterns in Large-scale Network Flows using Tensor Decompositions
We present an approach to a cyber security workflow based on ENSIGN, a high-performance implementation of tensor decomposition algorithms that enable the unsupervised discovery of subtle undercurrents and deep, cross-dimensional correlations within multi-dimensional data. This new approach of starting from identified patterns in the data complements traditional workflows that focus on highlighting individual suspicious activities. The enhanced workflow assists in identifying attackers who craft their actions to subvert signature-based detection methods, and automates much of the labor-intensive forensic process of connecting isolated incidents into a coherent attack profile.

Tensor decompositions accept network metadata as multidimensional arrays, for example sender, receiver, port, and query type information, and produce components - weighted fragments of data that each capture a specific pattern. These components are the product of computationally intensive model-fitting routines that, with ENSIGN, have been aggressively optimized for the cyber domain. What ENSIGN provides is superior to other classical unsupervised machine learning approaches, such as dimensionality reduction or clustering, in that a decomposition into components can capture patterns that span the entire multidimensional data space. This can include patterns that reflect multiple sources, multiple receivers, periodic time intervals, and other complex correlations. After unsupervised discovery, domain knowledge attaches meaning to a handful of components, each isolating a key contributing pattern to the overall network flow. In most cases, the story underpinning the existence of a component is a self-evident, easily recognizable pattern of expected, benign activity. However, in other cases, patterns emerge among one or more dimensions - regular time intervals, a common destination, a common request type - that reflect a deeper, more directed, intent.

Operating last year in the Security Operations Center (SOC) at SCinet - the large-scale research network stood up each year in support of the annual Supercomputing Conference (SC) - ENSIGN analyzed metadata collected for more than 600 million flows over a two-day span. ENSIGN tensor decomposition methods isolated activities of concern including the evolution of an SSH attack from scan to exploitation and a subtle, persistent attempt at DNS exfiltration. We present results from an updated and more advanced deployment of ENSIGN at SCinet as part of SC16. We highlight how the ENSIGN analytics used at SC are suited for automated post-processing and recurrent pattern detection, making them ideal for nightly reports. We demonstrate how novel joint tensor decompositions enable data fusion, allowing patterns to be discovered from multiple data sources with common elements. Finally, we illustrate an end-to-end workflow where ENSIGN builds on R-Scope (www.reservoir.com/product/ensign-cyber), a scalable and hardened network security monitor based on Bro (www.bro.org) that collects the rich contextual metadata crucial to the success of unsupervised discovery, and Splunk as a metadata access store. We show how this combination provides a powerful analytic tool for security professionals in capturing and visualizing - and ultimately comprehending - the patterns contained within the vast volumes of traffic on a large-scale network.

Speakers

James Ezick

Reservoir Labs
James Ezick is the lead for Reservoir's Analytics, Reasoning, and Verification Team. Since joining Reservoir in 2004, he has developed solutions addressing a broad range of research and commercial challenges in verification, compilers, cyber security, software-defined radio, high-performance... Read More →



Thursday January 12, 2017 9:00am - 9:30am PST
Great Room V-VIII 7450 Hazard Center Dr.

9:30am PST

Scalable Temporal Analytics to Detect Automation and Coordination
Temporal analysis of cyber data can be leveraged in a number of ways to identify automated behavior, including periodic, "bursty", and coordinated activity. Malware frequently makes use of regular or periodic polling in order to receive updates or commands. Bursty and coordinated activity can be indicative of scanning, denial of service, and exfiltration among victims. Automated behaviors discovered through temporal analysis can be fed into post-processing analytics, such as whitelisting/filtering and clustering, to identify anomalous or outlier automated behaviors on cyber networks.

This presentation will focus on scalable and flexible techniques for applying these analytics to various types of logs/features, as well as methodologies to further narrow the results to anomalous/outlier cases that may be indicative of a cyber security event. Operational use cases leveraging these techniques on real-world data will be presented. For example, Kaspersky's recent (July 2016) report on the "Project Sauron" advanced persistent threat (footnote: https://securelist.com/files/2016/07/The-ProjectSauron-APT_research_KL.pdf) identifies the use of DNS and/or HTTP to poll/check in to C2 at specific times, supporting up to 31 unique date/time parameters. Scalable, flexible temporal analysis of network traffic would allow for identification of such automated behavior.

The specific algorithms used to identify periodic behavior include a Fourier transform, used to identify candidate periodicities that are then filtered down and refined using the autocorrelation function of the time series. A fast Fourier transform algorithm computes the transform of each time series, while an inverse fast Fourier transform applied to the resulting periodogram yields the autocorrelation function. These operations are performed at scale, in parallel, across millions of entities (e.g. IP addresses). "Bursty" behavior is detected by comparing time series values to summary statistics of the series over a sliding window in time for each entity. Coordinated activity is found by performing a nearest-neighbor search across entities in various metric spaces using Jaccard, Cosine, or Euclidean distance. Distance is measured on feature spaces including Fourier coefficients, sets of time stamps where activity or spikes are observed (referred to as time signatures), and shingles of inter-arrival time sequences. The nearest-neighbor search is performed using a scalable locality-sensitive hashing algorithm that allows us to filter large sets of data down to entities with similar temporal behavior. We can apply this technique across multiple data sources, leveraging the common time dimension in each, in order to identify entities that are acting in an apparently coordinated manner, while accounting for possible offsets in log synchronization. Post-processing on the set of 'similar' entities discovered in this manner may include applying unsupervised learning techniques to flag anomalous coordinated activity, as well as supervised techniques to classify coordinated activity that has been whitelisted.
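The periodicity step can be illustrated in miniature. At scale the talk computes the autocorrelation via the FFT/inverse-FFT route described above; the direct O(n²) sum below is the equivalent textbook form, and the per-minute counts are an invented example of a host checking in every five bins:

```python
def autocorrelation(series):
    """Autocorrelation at every lag, computed with the direct sum. The
    Wiener-Khinchin route (FFT -> periodogram -> inverse FFT) gives the
    same result far faster on long series."""
    n = len(series)
    mean = sum(series) / n
    centered = [x - mean for x in series]
    var = sum(x * x for x in centered)
    return [sum(centered[i] * centered[i + lag] for i in range(n - lag)) / var
            for lag in range(n)]

def dominant_period(series, min_lag=2):
    """Lag (in time bins) with the strongest autocorrelation peak."""
    ac = autocorrelation(series)
    return max(range(min_lag, len(ac) // 2), key=lambda lag: ac[lag])

# Hypothetical per-minute event counts for one IP address: a check-in
# every 5 bins on top of low background activity.
counts = [3 if t % 5 == 0 else 1 for t in range(60)]
period = dominant_period(counts)  # -> 5
```

Run per entity, this yields the candidate period that the full pipeline would then refine and feed into the nearest-neighbor search.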

Speakers

Lauren Deason

Data Scientist, DZYNE Technologies
Lauren Deason is a Data Scientist with DZYNE Technologies working on the DARPA Network Defense program, focusing on applying digital signal processing and machine learning techniques to detect automated and coordinated behavior in cyber data. Lauren holds a PhD in Economics from... Read More →



Thursday January 12, 2017 9:30am - 10:00am PST
Great Room V-VIII 7450 Hazard Center Dr.

10:00am PST

'Lions and Tigers and Bears, Mirai!': Tracking IoT-Based Malware w//Netflow
The Mirai malware rose to prominence in late 2016 with record-breaking Distributed Denial of Service (DDoS) attacks from a botnet built largely from the unlikeliest of sources - the various Linux-based devices that make up the so-called Internet of Things (IoT). "Are we vulnerable to Mirai? Do we have any active infections? Are we participating in the DDoS attacks? What can we do to protect ourselves?" These are all questions that should immediately come to mind for IT managers and network defenders. The NCCIC/US-CERT Network Analysis Team leveraged the National Cybersecurity Protection System (NCPS), better known as EINSTEIN, to answer these questions for U.S. Federal Government entities.

This presentation will begin with an overview of Mirai and why it is notable, discussing key aspects of Mirai's behavior drawn from analysis of the Mirai source code and community open-source research. Next, we will present the analysis methodology we employed, leveraging both netflow and content-based network traffic analysis to correlate known indicators and infrastructure with behavioral characteristics, and discuss how the two were used to complement one another. Finally, we will share some lessons learned and thoughts on the future of IoT-based threats and defensive strategies.

Speakers

Kevin Breeden

Kevin Breeden is a network security analyst currently supporting the United States Computer Emergency Readiness Team (US-CERT) Network Analysis branch. Kevin's primary responsibilities are network traffic analysis through various proactive and reactive analysis techniques centered... Read More →



Thursday January 12, 2017 10:00am - 10:30am PST
Great Room V-VIII 7450 Hazard Center Dr.

11:00am PST

Incorporating Network Flow Analysis Into an Insider Threat Program

In recent years, many organizations across government, industry, and academia have recognized the need to build an insider threat program (InTP) to protect their critical assets. Insider threat programs fuse information from across traditionally stovepiped portions of organizations (such as HR, IT, and physical security) to identify technical and behavioral activity of concern. In this presentation, we will discuss how modern insider threat programs work, what they are designed to prevent, detect, and respond to, and how NetFlow analysis can (and should) be incorporated into an insider threat program.


Speakers

Dan Costa

Technical Solutions Team Lead, CERT Division, Software Engineering Institute
Dan Costa is the Technical Solutions Team Lead for the Enterprise Threat & Vulnerability Management team in the CERT Division of the Carnegie Mellon Software Engineering Institute. Dan designs, develops, and transitions tools, algorithms, and exercises that enhance organizations... Read More →



Thursday January 12, 2017 11:00am - 11:30am PST
Great Room V-VIII 7450 Hazard Center Dr.

11:30am PST

Developing Insider Threat Indicators from Netflow

Insider threat analysts look for anomalous behavior and activity across a wide array of data sources – host-based audit logs, human resource management systems, anonymous reporting mechanisms, and even SIEM tools. In this presentation, we will provide examples of how Netflow data can be and has been used to detect anomalous insider behavior and activity, and show how correlating information from other data sources can be used to increase the effectiveness of the Netflow-based indicators.


Speakers

Dan Costa

Technical Solutions Team Lead, CERT Division, Software Engineering Institute
Dan Costa is the Technical Solutions Team Lead for the Enterprise Threat & Vulnerability Management team in the CERT Division of the Carnegie Mellon Software Engineering Institute. Dan designs, develops, and transitions tools, algorithms, and exercises that enhance organizations... Read More →



Thursday January 12, 2017 11:30am - 12:00pm PST
Great Room V-VIII 7450 Hazard Center Dr.

1:00pm PST

Detecting Threats, Not Sandboxes: Characterizing Network Environments to Improve Malware Classification
Applying supervised machine learning to network data features is increasingly common; it is well suited for tasks such as the detection of malicious flows and application identification. In these applications, it is essential to avoid biases that can arise due to the fact that different training datasets are obtained in different network environments. Unfortunately, it is not straightforward to understand how these environments can introduce biases; many previous studies have not even attempted to do so. In this work, we focus on the important case of training data obtained from malware sandboxes, and its use in detecting malware communications on enterprise networks. We present techniques to identify data features derived from the TCP/IP, TLS, DNS, and HTTP protocols that are artifacts of network environments, and show data features that are invariant across those environments.

HTTP headers provide a good example; the user-agent is often, but not always, invariant. The via header, on the other hand, indicates that a flow has passed through a proxy, and thus it is not representative of the application's type or intention, but rather a feature of the network environment. In our datasets, nearly 100% of the enterprise HTTP flows contained the "via" header, but the header was uncommon in the malware sandbox dataset. A naïve application of machine learning would use this fact to achieve low error in cross-validation tests, but it would also fail to capture the concept of maliciousness, and its efficacy on real network traffic would suffer.

A similar situation holds for TLS, which contains a complex set of data features. Most Windows sandboxes run Windows XP to maximize the probability that a submitted malware sample executes. TLS flows that use the underlying operating system's TLS library therefore rely on an outdated version of SChannel. In the cases where malware samples use SChannel, offering obsolete TLS ciphersuites is not an inherent feature of the malware, but rather a feature of the sandbox environment. Understanding and accounting for these biases is necessary to create machine learning models that accurately discern malicious traffic from benign enterprise traffic, rather than simply learning to classify different network environments.

In addition to highlighting these pitfalls, we offer solutions to the problems and demonstrate their results. By understanding the target network environment and creating training datasets composed of synthetic samples, we can systematically avoid sandbox bias. For example, when monitoring a network with a web proxy enabled and where Windows 10 is the most prevalent operating system, we create synthetic HTTP flows by modifying the existing malware HTTP flows to include the appropriate "via" header. Similarly, we modify the TLS ciphersuite offer vector and extensions to resemble the appropriate version of SChannel. Finally, we use the synthetic malware dataset and baseline benign data collected from the enterprise network to create robust machine learning classifiers that can be deployed on the enterprise network.
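The synthetic-sample idea can be sketched as follows. This is a minimal hypothetical illustration, assuming flows are represented as plain header dictionaries; the field names, proxy string, and `synthesize_for_target` helper are all invented for the example, and only the "via" rewrite from the abstract is shown:

```python
def synthesize_for_target(malware_flows, target_env):
    """Rewrite sandbox-collected flows so that environment artifacts match
    the target network: here only a hypothetical 'via' header, standing in
    for the full set of proxy/OS adjustments described in the talk."""
    synthetic = []
    for flow in malware_flows:
        f = dict(flow)  # copy so the original sandbox records stay intact
        if target_env.get("proxy"):
            f["via"] = target_env["proxy"]   # artifact of the network, not the malware
        else:
            f.pop("via", None)
        synthetic.append(f)
    return synthetic

# Hypothetical sandbox capture and target environment description.
sandbox_flows = [{"user-agent": "Mozilla/4.0", "host": "evil.example"}]
target = {"proxy": "1.1 proxy.corp.example"}
train = synthesize_for_target(sandbox_flows, target)
# train[0] now carries the 'via' header that would otherwise let a
# classifier separate sandbox traffic from proxied enterprise traffic.
```

The same copy-and-rewrite pattern would apply to the TLS ciphersuite vector or any other environment-dependent feature.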

Speakers

Blake Anderson

Cisco Systems Inc.
Blake received his PhD from the University of New Mexico. In his dissertation, he developed novel machine learning techniques and applied these techniques to classify, cluster, and find phylogenetic relationships on malware data. Blake spent time performing security research at Los... Read More →

David McGrew

Cisco Systems, Inc.
David McGrew is a Fellow in the Advanced Security Research Group at Cisco, where he works to improve network and system security through applied research, standards, and product engineering.  His current interests are the detection of threats using network technologies and the development... Read More →



Thursday January 12, 2017 1:00pm - 1:30pm PST
Great Room V-VIII 7450 Hazard Center Dr.

1:30pm PST

I Want Your Flows To Be Lies
Real-time and recorded flow data can be an incredible boon to systems administrators, providing a comprehensive view of how a network functions, or fails to function. Changes in flow data can also be used to detect anomalous behavior like an intruder, a data exfiltration attempt, or a DDoS attack. All of this is great. So why do I want to fill your flow data with lies?

Flow data provides exactly the same information to an attacker: which servers are important, and where the interesting data lies. This is one reason sophisticated attackers go after routers early: what a great source of information about what matters on the network! Suddenly it is easy to distinguish high-value servers from low-value servers, and real machines from honeypots.

CyberChaff and Prattle are novel network defense solutions that work by injecting fake nodes and fake traffic into your networks, to mask the true topology and direct attackers towards alarms. In this talk, I describe how we can use this same infrastructure to mask the real flows on your network, decreasing their value to an adversary and hiding the defensive areas of your network. I'll even show you how to hook your flow data back into Prattle, to ensure that nothing stands out to the attacker.

And then, finally, I'll show you how you can get the information you wanted back without tipping your hand.
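One way such a recovery scheme could work (purely illustrative; this is not how Prattle is actually keyed) is to derive one field of each decoy flow from a keyed hash over the rest of the record, so that only the defender can tell decoys from real traffic:

```python
import hashlib
import hmac

SECRET = b"defender-only-key"  # hypothetical key, never visible on the wire

def _tagged_sport(src, dst, ts):
    """Source port derived from an HMAC over the rest of the record."""
    msg = f"{src}|{dst}|{ts}".encode()
    tag = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return 1024 + int.from_bytes(tag[:2], "big") % 64000

def make_decoy(src, dst, ts):
    """Emit a fake flow record whose source port encodes the hidden tag."""
    return {"src": src, "dst": dst, "ts": ts, "sport": _tagged_sport(src, dst, ts)}

def is_decoy(flow):
    """Only the key holder can recompute the tag; to anyone else a decoy
    flow is indistinguishable from a real one."""
    return flow["sport"] == _tagged_sport(flow["src"], flow["dst"], flow["ts"])

# Hypothetical records: one real flow, one generated decoy.
real = {"src": "10.0.0.5", "dst": "10.0.9.2", "ts": 1484241600, "sport": 49731}
fake = make_decoy("10.0.0.6", "10.0.9.2", 1484241600)
# The defender filters decoys back out; an observer without SECRET cannot.
recovered = [f for f in [real, fake] if not is_decoy(f)]
```

An observer sees plausible ports everywhere; the defender strips tagged records before analysis and gets the true flow picture back.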

Speakers

Adam Wick

Galois, Inc.
Adam Wick leads the systems software group at Galois, Inc., an R&D company in Portland, OR. Galois does research in formal methods, programming language development, operating systems, compiler engineering, and security. Dr. Wick has worked in a variety of fields at all level of the... Read More →



Thursday January 12, 2017 1:30pm - 2:00pm PST
Great Room V-VIII 7450 Hazard Center Dr.

2:30pm PST

Conference Close
Thanks for attending FloCon 2017. See you all in Tucson for FloCon 2018!


Thursday January 12, 2017 2:30pm - 2:35pm PST
Great Room V-VIII 7450 Hazard Center Dr.
 