News and Events

Lecture Series


Upcoming Seminars:

Seminar by Dr. Joseph Peters (School of Computing Science, Simon Fraser University)

OPTIMIZING ENERGY AND BANDWIDTH IN MOBILE STREAMING SYSTEMS

Wednesday, July 16th, 2014, 11:00AM, EV 1.162

 

Abstract:

Dramatic recent improvements in the computing power, memory capacity, screen size, and video quality of mobile devices have resulted in substantial demand for mobile multimedia services. However, these improvements have also resulted in increased demands for limited and expensive wireless bandwidth and for the energy of mobile devices with limited battery capacities. In this talk, I will consider the problem of multicasting multiple video streams from a wireless base station to many mobile receivers over a common wireless channel. I will present a sequence of increasingly sophisticated approaches to the problem of optimizing both the bandwidth utilization of the wireless channel and the energy usage of the mobile receivers.

Bio:

Joseph Peters received a B.Math. degree from the University of Waterloo and the M.Sc. and Ph.D. degrees from the University of Toronto, all in Computer Science. He is currently a professor in the School of Computing Science at Simon Fraser University near Vancouver. His research interests include the modelling and performance analysis of communication networks, communication algorithms, distributed computation, combinatorial approximation, and graph theory. Recently, he has been investigating multimedia networking.

 

____________________________________________

 

Seminar by Dr. Joseph Peters (School of Computing Science, Simon Fraser University)

 

SPREADING INFLUENCE IN SOCIAL NETWORKS WITH TIME CONSTRAINTS

Monday, July 14th, 2014, 14:00, EV3.309

 

Abstract:

In a social network, agents change their behaviours and opinions on the basis of information collected from their neighbours. Generally, recent information is more influential than older information, and information that is received in a short period of time is more influential than information received during a long period of time. An example of this phenomenon is consumer reviews on websites such as Amazon. Another example is viral marketing which attempts to influence consumer adoption of products. A third example is recent communication strategies of politicians.

In this talk, I will present a graph-based model of the spread of influence in networks that generalizes previous research by including temporal information. The goal is to identify a small set of nodes that eventually influences all nodes in the graph with the restriction that influence only lasts for a bounded time interval. The problem for general graphs is computationally difficult even for approximate solutions. The talk will focus on efficient algorithms for restricted families of graphs: paths, rings, trees, and complete graphs.
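To make the time-constrained setting concrete, here is a minimal toy simulation (a sketch only: the threshold rule, the window semantics, and all parameters are illustrative assumptions, not necessarily the speaker's exact model) in which a node activates once enough neighbours have been activated within the last few steps:

    # Sketch only: a minimal time-window influence model on an undirected
    # graph. The threshold rule and window semantics are assumptions made
    # for illustration, not the speaker's exact model.
    def spread(adj, seeds, thresholds, window, max_steps=50):
        """adj: dict node -> set of neighbours; returns node -> activation time."""
        activation_time = {v: 0 for v in seeds}
        for t in range(1, max_steps + 1):
            newly_active = []
            for v in adj:
                if v in activation_time:
                    continue
                # Influence only counts while it is "fresh", i.e. from
                # neighbours activated within the last `window` steps.
                fresh = sum(1 for u in adj[v]
                            if u in activation_time
                            and t - activation_time[u] <= window)
                if fresh >= thresholds[v]:
                    newly_active.append(v)
            if not newly_active:
                break
            for v in newly_active:
                activation_time[v] = t
        return activation_time

    # Example: a path 0-1-2-3-4, unit thresholds, influence lasting 2 steps.
    path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
    print(spread(path, seeds={0}, thresholds={v: 1 for v in path}, window=2))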

Bio:

Joseph Peters received a B.Math. degree from the University of Waterloo and the M.Sc. and Ph.D. degrees from the University of Toronto, all in Computer Science. He is currently a professor in the School of Computing Science at Simon Fraser University near Vancouver. His research interests include the modelling and performance analysis of communication networks, communication algorithms, distributed computation, combinatorial approximation, and graph theory. Recently, he has been investigating multimedia networking.

 

____________________________________________

 

Past Seminars:

 

Seminar by Dr. Jorge Bernardino (Instituto Superior de Engenharia de Coimbra)

 

INCREASING SPEEDUP AND CONFIDENTIALITY IN DATA WAREHOUSING

Monday, June 16, 2014, 10:30AM, EV3.309

 

Abstract:

Data warehouses integrate massive amounts of data from multiple sources and are primarily used for decision support. These large data volumes bring significant challenges to database engines, requiring a high level of query speedup. Data warehouses must have efficient Online Analytical Processing (OLAP) tools to process complex analytical queries, satisfying the information needs of business managers and helping them to make faster and more effective decisions. Typical warehouse queries are complex and ad hoc in nature, generally access huge volumes of data, and perform many joins and aggregations. Improving query speedup in such environments is very difficult and can only be achieved by a combination of different approaches, in particular the use of materialized views, advanced indexes and parallel query processing. However, achieving quick response times with complex OLAP queries is still an open issue. In this presentation we propose a technique to solve this problem, called DWS (Data Warehouse Striping), and we analyse the scalability of the DWS system in different environments. On the other hand, DWs are the core of sensitive business data and store the secrets of the business itself. Data confidentiality focuses on protecting data from unauthorized disclosure. Consequently, securing DWs against data damage and information leakage is a critical goal.

We propose a Specific Encryption Solution tailored for Data Warehouses (SES-DW), a lightweight encryption cipher for numerical values that uses only mixes of standard SQL operators such as eXclusive OR (XOR) and modulus (MOD), together with additions and subtractions, and that aims at balancing the trade-off between data security and database performance. Experimental evaluation demonstrates that the proposed techniques outperform standard and state-of-the-art approaches while providing substantial security strength.
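As a purely illustrative aside (this is not the actual SES-DW cipher, whose details are not given here), the following toy sketch shows how a numeric value can be obscured and exactly recovered using only the kinds of operators the abstract mentions, all of which are expressible in standard SQL:

    # Toy illustration only -- NOT the SES-DW cipher. Keys and modulus are
    # hypothetical; a real scheme would vary them per row and per column.
    M = 10**9                              # modulus larger than any stored value
    K_XOR, K_ADD = 0x5A5A5A5A, 123456789   # made-up per-column keys

    def toy_encrypt(value):
        # Additive masking modulo M, then an XOR mask: all three operators
        # (addition, MOD, XOR) exist as standard SQL expressions.
        return ((value + K_ADD) % M) ^ K_XOR

    def toy_decrypt(cipher):
        return ((cipher ^ K_XOR) - K_ADD) % M

    v = 987654
    assert toy_decrypt(toy_encrypt(v)) == v   # round-trips for 0 <= v < M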

Bio:

Jorge Bernardino received the PhD degree in computer science from the University of Coimbra in 2002. He is a Coordinator Professor at ISEC (Instituto Superior de Engenharia de Coimbra) of the Polytechnic of Coimbra, Portugal. His main research fields are big data, data warehousing, business intelligence, open source tools, and software engineering, subjects in which he has authored or co-authored dozens of papers in refereed conferences and journals. Jorge Bernardino has served on the program committees of many conferences and acted as a referee for many international conferences and journals. He was President of ISEC from 2005 to 2010. Currently, he is serving as General Chair of the IDEAS'2014 conference and is a visiting professor at Carnegie Mellon University (CMU).

 

____________________________________________

 

Seminar by Dr. Virendrakumar C. Bhavsar (University of New Brunswick)

 

SIMILARITY OF WEIGHTED TREE STRUCTURES AND APPLICATIONS

Wednesday, June 11, 2014, 10:30AM, EV3.309

 

Abstract:

The notion of similarity (or matching) has played a very important role since the beginnings of artificial intelligence. The matching process, depending on the application, may involve keyword matching (e.g., Google), schema matching, taxonomic similarity and ontology matching, or other types of similarities. Clustering is one of the basic computations in many Big Data applications, and it is based on the concept of similarity (or distance).

We have developed novel weighted tree similarity algorithms applicable to many domains, e.g., bioinformatics, e-Business, e-Health, e-Learning and the semantic web. We have also carried out high performance implementations of these algorithms on clusters and graphics processing units (GPUs). This talk will present an overview of the algorithms and their applications.

Bio:

Virendrakumar C. Bhavsar received the B.Eng. (Electronics and Telecommunications) from the University of Poona, India, and the M.Tech. (Electrical Eng.) and Ph.D. (Electrical Eng.) degrees from the Indian Institute of Technology, Bombay. Dr. Bhavsar was a faculty member in the Department of Computer Science and Engineering, Indian Institute of Technology, Bombay, from 1974 to 1983. Since 1983 he has been at the University of New Brunswick, Fredericton, where he is currently a Professor in the Faculty of Computer Science. He was the Dean of the Faculty during 2003-08. He is the founding Director of the Advanced Computational Research Laboratory, which has housed high performance computing systems since 2000. He co-led the bioinformatics component of the Canadian Potato Genomics project. He has also been involved in the Atlantic Computational Excellence Network (ACEnet), an approximately $30 million high performance computing initiative in Atlantic Canada.

His current research interests include parallel and distributed processing, artificial intelligence applications in e-Business, e-Learning and bioinformatics, and the semantic web. He has authored over 150 research papers in journals and conference proceedings and has edited three volumes.

 

____________________________________________

Seminar by Professor Hai Zhuge (Nanjing University of Posts & Telecommunications)

 

DIMENSIONS ON TEXTS

Friday, April 25, 2014, 10:30AM, EV3.309

 

Abstract:

Summarization is a key feature of human intelligence. With the rapid and continual expansion of texts in cyberspace, automatic text summarization becomes more and more desirable. Traditional automatic summarization methods process texts empirically while neglecting fundamental characteristics and principles of language use and understanding. This talk summarizes previous research methods in a multi-dimensional classification space, unveils the limitations of previous methods, introduces fundamental characteristics and principles, and proposes a summarization methodology including principles, strategies, rules, research methods, a system framework, principles of evaluation, and the necessity of summarization. The basic viewpoints are: (1) a text is understood from outside of the text; (2) summarization is an open social process of building citations from one text to another text or a set of texts; (3) automatic summarization has a limitation; and (4) research should link text to cyberspace, physical space and social space to approach that limitation. Studies of the summarization of pictures, videos and graphs converge toward a general summarization method. This talk will further discuss some fundamental issues and methods for processing texts that take human cognition, knowledge and semantics into account.

Bio:

Hai Zhuge is the pioneer of Cyber-Physical Society research and Knowledge Grid research. He invented the Multi-Dimensional Classification Space and the Semantic Link Network Model as the fundamental models to manage various resources in Cyber-Physical Society. He is the author of The Knowledge Grid: Toward Cyber-Physical Society. He is an ACM Distinguished Scientist, an ACM Distinguished Speaker, and a Fellow of the British Computer Society. He is a joint professor of Nanjing University of Posts and Telecommunications and of the Key Laboratory of Intelligent Information Processing of the Chinese Academy of Sciences. He has presented 15 keynotes at international conferences. He received the Wang Xuan Award of the China Computer Federation, and was awarded a Distinguished Visiting Fellowship of the Royal Academy of Engineering in 2013. He is serving as an associate editor of IEEE Intelligent Systems and is steering the International Conference on Semantics, Knowledge and Grids. Email: zhuge@ict.ac.cn. Webpage: www.knowledgegrid.net/~h.zhuge.

 

____________________________________________

Distinguished Seminar by Dr. Avi Wigderson (Institute for Advanced Study)

 

RANDOMNESS

Friday, March 28, 2014, 6pm, EV 1.605

 

Abstract:

Is the universe inherently deterministic or probabilistic? Perhaps more importantly - can we tell the difference between the two?

Humanity has pondered the meaning and utility of randomness for millennia. There is a remarkable variety of ways in which we utilize perfect coin tosses to our advantage: in statistics, cryptography, game theory, algorithms, gambling... Indeed, randomness seems indispensable! Which of these applications would survive if the universe had no randomness in it at all? Which of them would survive if only poor-quality randomness were available, e.g., randomness that arises from "unpredictable" phenomena like the weather or the stock market?

A computational theory of randomness, developed in the past three decades, reveals (perhaps counter-intuitively) that very little is lost in such deterministic or weakly random worlds. In the talk I'll explain the main ideas and results of this theory.

The talk is aimed at a general audience, and no particular background will be assumed.

 

Bio:

Dr. Avi Wigderson is a Professor at the School of Mathematics, Institute for Advanced Study, Princeton. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences. He was awarded the Gödel Prize, the Conant Prize, and the Nevanlinna Prize for his contributions to theoretical computer science.


 

____________________________________________

 

Seminar by Dr. Horacio Saggion (Universitat Pompeu Fabra)

 

SIMPLIFYING SPANISH TEXTS WITH COMPUTERS

Thursday, March 27, 2014, 2:45PM, EV 11.119

 

Abstract:

Automatic text simplification (ATS) is a complex task which encompasses a number of operations applied to a text at different linguistic levels. The aim is to turn "complex" textual input into a simplified variant, taking into consideration the specific needs of a particular target user or task. ATS can serve as a pre-processing tool for other NLP applications, but most importantly it can have a social function, making content accessible to different types of users. ATS has been on the NLP research agenda for a number of years, and although some progress has been made in different aspects of the text simplification problem, there are still issues to be resolved. In this presentation, I will discuss the problem of text simplification and report on a number of developments at our laboratory to make textual content in Spanish more accessible.

 

Bio:

Horacio Saggion is a Ramón y Cajal Research Professor at the Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona. He is associated with the Natural Language Processing group, where he works on automatic text summarization, text simplification, information extraction, sentiment analysis and related topics. His research is empirical, combining symbolic, pattern-based approaches with statistical and machine learning techniques. Before joining Universitat Pompeu Fabra, he worked at the University of Sheffield on a number of UK and European research projects (SOCIS, MUMIS, MUSING, GATE, CUBREPORTER) developing competitive human language technology. He was also an invited researcher at Johns Hopkins University for a project on multilingual text summarization. Horacio is currently principal investigator in the EU-funded projects Dr Inventor and ABLE-TO-INCLUDE and in the Spanish national project SKATER-TALN-UPF. He was previously scientific coordinator of the Simplext project. He has published over 100 works in leading scientific journals, conferences, and books in the field of human language technology. He is co-editor of a book on multilingual, multisource information extraction and summarization recently published by Springer. Horacio is a member of the ACL, IEEE, ACM, and SADIO. He is a regular programme committee member for international conferences such as ACL, EACL, COLING, EMNLP, IJCNLP, and IJCAI, and is an active reviewer for international journals in computer science, information processing, and human language technology.

 

____________________________________________

 

Seminar by Dr. Peter Gacs (Boston University)

 

CLAIRVOYANT EMBEDDING IN ONE DIMENSION

Wednesday, March 12, 2014, 10:30AM-12:00PM, EV 3.309

 

Abstract:

Let v, w be infinite 0-1 sequences and m a positive integer. We say that w is m-embeddable in v if there exists an increasing sequence (n_i) of integers with n_0 = 0 such that 0 < n_i - n_{i-1} < m and w(i) = v(n_i) for all i > 0. Let X and Y be independent coin-tossing sequences. We will show that there is an m with the property that Y is m-embeddable into X with positive probability. This answers a question that was open for a while. The proof generalizes somewhat the multi-scale method of an earlier paper of the author on dependent percolation.
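In standard notation, the definition and result from the abstract read:

    Let $v, w \in \{0,1\}^{\mathbb{N}}$ and let $m$ be a positive integer. Then
    $w$ is \emph{$m$-embeddable} in $v$ if there is an increasing integer
    sequence $(n_i)$ with $n_0 = 0$ such that
    \[
      0 < n_i - n_{i-1} < m \qquad\text{and}\qquad w(i) = v(n_i) \quad\text{for all } i > 0.
    \]
    The result: for independent coin-tossing sequences $X$ and $Y$, there exists
    an $m$ such that
    \[
      \Pr\bigl[\,Y \text{ is } m\text{-embeddable in } X\,\bigr] > 0.
    \]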

 

Bio:

Before coming to Boston University, Dr. Peter Gacs studied in Budapest, worked at the Hungarian Academy of Sciences with trips to Moscow, obtained his PhD in Frankfurt, did postdoctoral work at Stanford, and taught in Rochester. Professor Gacs has worked on problems derived from information theory (classical and algorithmic) and reliable computation. With Ahlswede and Körner, he wrote some of the earliest papers on multi-user information theory. In algorithmic information theory (Kolmogorov complexity), Gacs also had a part in developing the fundamental results (earlier with Levin, later with Vitányi and others). In reliable computation, his main contributions are to the probabilistic cellular automaton model: in some sense the most natural model, but a mathematically difficult one. He has been the principal investigator of several NSF grants, and is an external member of the Hungarian Academy of Sciences.

 

____________________________________________

 

Seminar by Dr. George Giakkoupis (INRIA Rennes, France)

 

RUMOR SPREADING AND GRAPH EXPANSION

Monday, March 10, 2014, 10:30AM-12:00PM, EV 2.260

 

Abstract:

Randomized rumor spreading is a basic model for information dissemination in networks. Each node periodically contacts a random neighbor, and the two nodes exchange any information they currently have. This allows information to spread in the network in an "epidemic" style. Randomized rumor spreading provides a simple, scalable, and robust protocol for message broadcasting, which is particularly relevant for large, unknown, or dynamically changing networks. Further, it is interesting from a sociological perspective, as it provides a simple model for how information, rumors, or ideas spread in social networks.
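For concreteness, here is a minimal toy simulation of this push-pull style of spreading (a sketch only; the synchronous-round formulation, the ring example, and the termination criterion are illustrative assumptions):

    # Sketch of the protocol described above: in each synchronous round every
    # node contacts one uniformly random neighbour, and a contact between an
    # informed and an uninformed node informs both. Toy example only.
    import random

    def push_pull_rounds(adj, source):
        """Return the number of rounds until every node is informed."""
        informed = {source}
        rounds = 0
        while len(informed) < len(adj):
            snapshot = set(informed)          # state at the start of the round
            for v in adj:
                u = random.choice(sorted(adj[v]))
                if v in snapshot or u in snapshot:
                    informed.update((v, u))   # push if v knows, pull if u knows
            rounds += 1
        return rounds

    # Example: a ring of 16 nodes; expansion is poor, so spreading is slow.
    n = 16
    ring = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    print(push_pull_rounds(ring, source=0))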

In this talk, I will present some results that relate the speed of rumor spreading, that is, how quickly information spreads from a single source to all nodes in the network, to standard expansion parameters of the network. I will also discuss the impact of a dynamic network on rumor spreading.

 

Bio:

George Giakkoupis is a Researcher at INRIA Rennes, France. He received his PhD from the University of Toronto in 2008, and was a Postdoctoral Fellow at the University of Paris VII and the University of Calgary. His expertise is in the design and analysis of algorithms, in particular randomized and distributed algorithms. He has worked on epidemic protocols, search in social networks, peer-to-peer networks, and shared-memory distributed computing.

 

____________________________________________

 

Seminar by Dr. Petko Bogdanov (University of California, Santa Barbara)

 

MINING AND MODELING IN NETWORKS FROM DIVERSE DOMAINS

Monday, March 3, 2014, 10:30AM-12PM, EV 3.309

 

Abstract:

Graphs can represent relationships between data entities and hence provide an expressive model for big data produced by real-world systems. A number of big data domains feature an inherent network structure that can be modeled as a graph: social networks and media, transportation networks, gene networks, the brain, and communication and information networks. Studying, understanding and predicting the inherent processes in all of those areas requires scalable graph mining algorithms and models. While graphs are a well-suited representation model, they present unique algorithmic challenges. Real-world networks feature more than just the structure among entities: they evolve over time and incorporate content and features associated with nodes and edges. To address these challenges, we develop novel formulations and algorithms that scale to large instances without compromising the quality of the mined results.

In my talk, I will present my research on subgraph mining in time-evolving networks and modeling and predicting user behavior in social media. I will also discuss future research directions in this area.

 

Bio:

Dr. Petko Bogdanov is a postdoctoral fellow in the Computer Science department at University of California, Santa Barbara. He is also affiliated with the Network Science Collaborative Technology Alliance (NS-CTA). His research interests are in scalable data mining, data modeling and data management with a focus on graph data and with interdisciplinary applications in bioinformatics, sociology, neuroscience and materials research. He received his PhD and MS in Computer Science from UC Santa Barbara.

 

____________________________________________

 

Seminar by Dr. Karthekeyan Chandrasekaran (Harvard University)

 

OPTIMIZATION: BEYOND HEURISTICS AND DEALING WITH UNCERTAINTY

Monday, February 24, 2014, 10:30AM-12:00PM, EV 3.309

 

Abstract:

Optimization problems are ubiquitous in contemporary engineering. The principal barriers to solving several real-world optimization problems are input uncertainty and large solution space. In this talk, I will address two algorithmic approaches to overcome these difficulties. First, I will give provable guarantees for a well-known heuristic, namely the cutting plane method, to find min-cost perfect matchings. Second, I will present new tools to study probabilistic instances of integer programs.

 

Bio:

Dr. Karthekeyan Chandrasekaran is a Simons Postdoctoral Research Fellow at Harvard University. He obtained his B. Tech. in Computer Science and Engineering from the Indian Institute of Technology, Madras and his Ph.D. in Algorithms, Combinatorics, and Optimization from Georgia Tech. His primary research interests are in optimization, integer programming, probabilistic methods and analysis, and randomized algorithms.

 

____________________________________________

 

Seminar by Dr. Jaroslaw Szlichta (University of Toronto)

 

HOLISTIC AND EXTENSIBLE BUSINESS INTELLIGENCE AND BIG DATA CLEANING

Tuesday, February 11, 2014, 10:30AM-12PM, EV 2.260

 

Abstract:

Understanding the semantics of data is important for optimization of queries for business intelligence and data quality analysis. In this talk, we will present our holistic and extensible business intelligence and data cleaning techniques that help to improve data analysis and data quality, and we will outline future directions in light of the big data era.

As business intelligence applications have become more complex and as data volumes have grown, the analytic queries needed to support these applications have become more complex too. This increasing complexity raises performance issues and numerous challenges for query optimization. We introduced order dependencies (ODs) in data management systems. (ODs capture monotonicity properties in the data.) Our main goal is to investigate the inference problem for ODs, both in theory and in practice. We have developed query optimization techniques using ODs for business intelligence queries over data warehouses, and we have implemented these techniques in the IBM DB2 engine. We have shown how ODs can be used to improve the performance of real and benchmark analysis queries (providing an average 50% speed-up).
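As a hypothetical illustration of what an OD asserts (the table and column names below are made up, not taken from the talk): in a date dimension, sorting rows by date_id also sorts them by year, so an optimizer aware of the OD can elide a redundant sort.

    # Hypothetical illustration of an order dependency (OD). Column names are
    # made up: in this toy date dimension, "date_id orders year" holds, so a
    # sort on year is free whenever the rows are already sorted by date_id.
    def satisfies_od(rows, lhs, rhs):
        """Check the OD 'lhs orders rhs': sorting by lhs also sorts by rhs."""
        ordered = sorted(rows, key=lambda r: r[lhs])
        rhs_values = [r[rhs] for r in ordered]
        return all(a <= b for a, b in zip(rhs_values, rhs_values[1:]))

    dates = [
        {"date_id": 20140101, "year": 2014},
        {"date_id": 20131231, "year": 2013},
        {"date_id": 20140301, "year": 2014},
    ]
    print(satisfies_od(dates, "date_id", "year"))   # True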

Poor data quality is a barrier to effective, high-quality decision making based on data. Current data cleaning techniques apply mostly to traditional enterprise data rather than to big data, which is not only larger but also more dynamic and heterogeneous. Declarative data cleaning encodes data semantics as constraints (rules); errors arise when the data violates the constraints. Declarative data cleaning has emerged as an effective tool for both assessing and improving the quality of data. Recently, unified approaches that repair errors in both data and constraints have been proposed. However, both data-only and unified approaches are by and large static: they apply cleaning to a single snapshot of the data and constraints. We have proposed a continuous data cleaning framework that can be applied to dynamic data. Our approach permits both the data and its semantics to evolve, and suggests repairs based on the evidence accumulated as statistics. We built a classifier that predicts the type of repair needed (data repair, constraint repair, or a hybrid of both) to resolve an inconsistency, and that learns from past user repair preferences to recommend more accurate repairs in the future.

 

Bio:

Jarek Szlichta is a Postdoctoral Fellow at the University of Toronto working with Professor Renée Miller. His research concerns big data, business intelligence, data analytics, information integration, heterogeneous computing, systems, web search and machine learning. He received his doctoral degree from York University; during that time he held a 3-year fellowship at the IBM Centre for Advanced Studies in Toronto. His research at IBM focused on the optimization of queries for business intelligence, particularly order dependencies. He is a recipient of the IBM Research Student-of-the-Year award (2012) "for having insights and perspective that has significantly contributed to IBM in a matter of great importance". Previously he worked at Comarch Research & Development on designing and implementing the OCEAN GenRap system, an innovative data analytics reporting solution; this work was recognized with the prestigious CeBIT Business Award (2007). For a list of publications, please visit Jarek's web page: http://www.cs.toronto.edu/~szlichta/publications.html

 

____________________________________________

 

Seminar by Dr. Oliver van Kaick (Tel Aviv University)

 

HIGH-LEVEL REPRESENTATIONS FOR SHAPE UNDERSTANDING

Monday, February 10, 2014, 10:30AM-12PM, EV 3.309

 

Abstract:

During the last decade, the focus of research in computer graphics has shifted from generating images (rendering) to modeling and creation of 3D content. Modeling is a laborious task where artists need to be highly skilled to use the existing modeling tools, which typically involve working on low-level shape representations, e.g., triangle meshes. A recent trend in computer graphics research is the development of techniques that facilitate the creation of 3D models by manipulating shapes at a higher-level, relieving the users from considerable manual work. In this framework, shapes are represented as a collection of primitives defined at a more semantic level, e.g., shape parts (such as the legs, seat and back of a chair) or structural features (such as the feature contours of a chair). These higher-level representations then allow manipulating shapes at a more abstract level, independently of the underlying low-level representation. However, to create such higher-level representations, we first need to analyze the shapes, learn their semantic parts and the geometric relations among them.

In this talk, I will present our developments towards this goal. First, we introduce an unsupervised co-segmentation technique where we consistently segment a set of shapes coming from the same family. We achieve that by clustering shape parts in a descriptor space, which makes use of diffusion maps to explore the presence of third-party connections between parts. Next, we extend the unsupervised co-segmentation to efficiently incorporate direct user input, to arrive at a semi-supervised co-segmentation approach that allows obtaining a consistent segmentation that is close to error-free. Here, we make use of a spring system to obtain a part clustering that is constrained by the user input. Moreover, we are extending such representations to incorporate more semantics about the shapes. We learn the typical geometric configurations of parts throughout the set of shapes with a series of probability distributions, which can be used in applications such as repository exploration and guided shape editing. I will conclude the talk by giving a perspective on future directions for using such shape representations in content creation.

 

Bio:

Oliver van Kaick received the B.Sc. and M.Sc. degrees in computing science from Universidade Federal do Parana (UFPR), Brazil (2003 and 2005), and a Ph.D. from the School of Computing Science at Simon Fraser University (SFU), Canada (2011). In 2012, Oliver was a postdoctoral researcher at SFU as a MITACS Elevate Fellow, collaborating with his industrial partner PDFTron Inc. on a project on document layout analysis. Currently, he is a postdoctoral researcher at Tel Aviv University as an Azrieli Fellow. Oliver's research interests are concentrated in the area of computer graphics, including topics such as shape analysis, shape matching, and geometric modeling, while his general interests also include computer vision and machine learning. In his work, Oliver has collaborated in the development of techniques for shape matching and correspondence, as well as techniques for higher-level analysis of 3D shapes.

 

 

____________________________________________

 

Seminar by Dr. Amir-Massoud Farahmand (McGill University)

 

HOW TO SOLVE HIGH-DIMENSIONAL REINFORCEMENT LEARNING PROBLEMS WHILE AVOIDING THE CURSE OF DIMENSIONALITY?

Friday, February 7, 2014, 10:30AM-12PM, EV 3.309

 

Abstract:

In the 21st century, we live in a world where data is abundant. We would like to take advantage of this opportunity to make more accurate and data-driven decisions in many areas of life such as industry, healthcare, business, and government. This opportunity has encouraged many machine learning and data mining researchers to develop tools to benefit from data, especially for challenging high-dimensional problems. Nonetheless, the focus of research so far has mostly been on the task of prediction, and many complex decision-making problems, in particular sequential ones, remain almost untouched.

In this talk, I introduce Reinforcement Learning as a computational framework to model sequential decision-making problems. I then propose some theoretically sound data-driven algorithms to solve high-dimensional reinforcement learning problems that avoid the so-called curse of dimensionality. These algorithms apply some of the most successful principles from modern machine learning theory to the more general context of reinforcement learning. Finally, I showcase the wide range of applications of reinforcement learning problems by demonstrating how the proposed algorithms are applied to problems in healthcare (HIV management) and robotics (navigation).

 

Bio:

Amir-massoud Farahmand is a postdoctoral fellow at the School of Computer Science, McGill University. He received his PhD from the University of Alberta in 2011. His research interests are in machine learning, reinforcement learning and sequential decision-making problems, robotics, and optimization. Amir-massoud is the recipient of a Natural Sciences and Engineering Research Council of Canada (NSERC) postdoctoral fellowship. His work received the University of Alberta's Department of Computing Science PhD Outstanding Thesis Award for the period of 2011–2012, and has been published in top machine learning (MLJ, NIPS, ICML) and robotics (IROS and ICRA) venues. He will soon join the Robotics Institute, Carnegie Mellon University to continue his postdoctoral research.

 

____________________________________________

 

Seminar by Abusayeed Saifullah (Washington University in St. Louis)

 

REAL-TIME WIRELESS SENSOR-ACTUATOR NETWORKS FOR CYBER-PHYSICAL SYSTEMS

Monday, January 20, 2014, 10:30AM-12:00PM, EV 3.309

 

Abstract:

A cyber-physical system employs a tight combination of, and coordination between, computational, networking, and physical elements. Wireless sensor-actuator networks (WSANs) represent a new frontier of communication infrastructure for cyber-physical systems in many important application domains such as process control, smart manufacturing, and data center management. Sensing and control in these systems must meet firm requirements on real-time end-to-end communication. WSANs face significant challenges that stem from the requirements on real-time communication and on cyber-physical co-design, due to the close coupling between control and communication. In this talk, I will first present a new real-time wireless scheduling theory for fast real-time performance analysis of WSANs. For holistic optimization in wireless control systems under stringent resource constraints, I will also present a scheduling-control co-design approach that integrates real-time scheduling theory, wireless networking, and control in a unified framework. I will then present the design and implementation of a real-time WSAN for power management in enterprise data centers. I will conclude my talk with the future directions of my research on new networking platforms, real-time computing, and large-scale sensing and control for next-generation cyber-physical systems.

 

Bio:

Abusayeed Saifullah is a PhD candidate in the Department of Computer Science and Engineering at Washington University in St Louis. Advised by Chenyang Lu, he is a member of the Cyber-Physical Systems Laboratory at Washington University.  Abu's research primarily concerns Cyber-Physical Systems, and spans a broad range of topics in Real-Time Systems, Wireless Sensor Networks, Embedded Systems, and Parallel and Distributed Computing. He received the Best Student Paper Awards at the 32nd IEEE Real-Time Systems Symposium (RTSS) and at the 5th International Symposium on Parallel and Distributed Processing and Applications (ISPA), and Best Paper Nomination at the 18th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS).

 

____________________________________________

Seminar by Dr. Emad Shihab (Rochester Institute of Technology)

 

PRAGMATIC PRIORITIZATION OF SOFTWARE QUALITY ASSURANCE EFFORTS

Monday, January 13, 2014, 10:30AM-12:00PM, EV 11.119

 

Abstract:

Software Quality Assurance (SQA), which involves the processes and methods of ensuring that software does not break and meets its intended purpose, is one of the major focus points of Software Engineering research today. Researchers have found that a software system's past can be a very good indicator of its future quality. For example, prior work showed that the number of pre-release changes is a good predictor of fault-prone files. However, to date these predictions have limited adoption in practice. The most commonly cited reason is that the prediction identifies too much code to review without distinguishing the impact of these defects.

Our work focuses on making SQA research more pragmatic, i.e., making advanced SQA techniques applicable in practical settings. This work is based on our experience working with some of the world's largest software companies - namely Avaya, BlackBerry and Microsoft. We will share experiences and present two pragmatic approaches used to ensure software quality. First, we focus on understanding and identifying high impacting defects, i.e., defects that catch practitioners off-guard. Practitioners are much more interested in these high-impact defects since, as we have found, focusing on high-impact defects reduces the amount of code that needs to be reviewed by 40-50% and helps practitioners focus on defects that significantly impact the quality of their software systems.

In addition, we present a proactive approach where risky changes, i.e., changes that may break or cause errors in the software system, are flagged so defects can be avoided before they are widely integrated into the code. We will present the results of a year-long study involving more than 450 developers, spanning more than 60 teams to better understand and identify these risky changes. We find that attributes such as the number of lines of code added and the history of the files being modified by the change can be used to accurately identify risky changes with a recall of more than 67% and a precision that is 37-87% higher than a baseline model. Our change risk models are being used today by an industrial partner to manage the risk of their software projects.
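As a purely illustrative aside, a toy version of such a risk model can be sketched in a few lines (this is not the speaker's model: the data below is synthetic, the two features merely echo attributes the abstract mentions, and the choice of logistic regression is our own assumption):

    # Illustrative sketch only -- NOT the study's model. Two features in the
    # spirit of the abstract (lines added, past fixes in the touched files);
    # the data is synthetic and the model choice is an assumption.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 400
    lines_added = rng.poisson(40, n)
    past_fixes = rng.poisson(3, n)
    # Synthetic ground truth: bigger, historically buggier changes are riskier.
    risk_score = 0.03 * lines_added + 0.4 * past_fixes + rng.normal(0, 1, n)
    risky = (risk_score > np.percentile(risk_score, 80)).astype(int)

    X = np.column_stack([lines_added, past_fixes])
    model = LogisticRegression().fit(X[:300], risky[:300])
    print(model.score(X[300:], risky[300:]))   # held-out accuracy of the toy model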

 

Bio:

Dr. Emad Shihab is an Assistant Professor in the Department of Software Engineering at the Rochester Institute of Technology, New York, USA. His general research area is Software Engineering. He is particularly interested in Mining Software Repositories, Software Quality Assurance, Software Maintenance, Empirical Software Engineering and Software Architecture. He mines historical project data and applies Data Mining, Artificial Intelligence and Statistical Analysis techniques in order to build pragmatic solutions that practitioners can use to maximize their software quality with the least amount of resources. Some of his research has been done in collaboration with and/or adopted in industry by companies such as Avaya, Microsoft and Research In Motion.

 

 

____________________________________________

 

FILM SCREENING AND DISCUSSION WITH GEORGE CSICSERY  

Tuesday, December 3rd, 2013, 9:30-11:30am, EV3.309
All are welcome

The Department of Computer Science and Software Engineering is happy to welcome George Csicsery for a screening of his biographical documentary film Julia Robinson and Hilbert's Tenth Problem, followed by a discussion.

Bio:

George Paul Csicsery (http://www.zalafilms.com) is a writer and independent filmmaker. He has produced 32 documentaries on historical, ethnographic, cultural, and mathematical subjects, including "Where the Heart Roams" (1987), "Hungry for Monsters" (2003), "Troop 214" (2008), "The Thursday Club" (2005), and "Songs Along A Stony Road" (2011).

His screenplay "Alderman's Story," about events surrounding King Philip's War, was awarded first prize at the Rhode Island International Film Festival Screenplay Competition (2005). His films on mathematical subjects include "N is a Number: A Portrait of Paul Erdős" (1993), which received extensive television distribution in the U.S. and elsewhere, including on the Sundance Channel, and public television syndication via APT. "Taking the Long View: The Life of Shiing-shen Chern" (2011), a portrait of mathematician S. S. Chern produced for his centenary celebrations with the Mathematical Sciences Research Institute, is about to be broadcast on public television stations via NETA. "Hard Problems: The Road to the World's Toughest Math Contest" (2008) tells the story of the 2006 U.S. team at the International Mathematical Olympiad; it was produced by the Mathematical Association of America and broadcast through APT in 2009.

 

____________________________________________

 

Seminar by Chris Develder (Ghent University, Belgium)

 

DIMENSIONING (OPTICAL) NETWORKS FOR CLOUD COMPUTING

Tuesday, November 26, 2013, 2:00PM, EV 2.184

 

Abstract:

The evolution towards grid and cloud computing observed for over a decade illustrates the crucial role played by (optical) networks in supporting today's applications. Yet, traditional solutions to the problem of dimensioning optical networks, such as classical routing and wavelength assignment (RWA) algorithms, cannot be directly applied in these new settings. In this talk, we will explain how two fundamental concepts in cloud computing, namely the anycast routing principle and virtualization, complicate (optical) network dimensioning. Next, we will outline our work in solving the resulting challenges. First, we will introduce a generic grid/cloud network dimensioning problem that addresses the extra degree of freedom introduced by anycast and also incorporates data center server dimensioning into the problem setting. Here, anycast refers to the fact that we have some flexibility in deciding on the destination (as well as the route towards it) of the grid/cloud traffic. Second, we will indicate how this flexibility can also be exploited to save resources when designing failure-resilient cloud networks. Finally, we will present our ongoing work on resilient virtual network mapping.

 

Bio:

Chris Develder is currently an associate professor in the research group IBCN of the Dept. of Information Technology (INTEC) at Ghent University - iMinds, Ghent, Belgium. He is involved in national and European research projects (IST David, IST Phosphorus, IST E-Photon One, BONE, IST Alpha, IST Geysers, etc.). His research interests include dimensioning, modeling and optimizing optical (grid/cloud) networks and their control and management, smart grids, information retrieval and extraction, as well as multimedia and home network software and technologies.

 

____________________________________________

Seminar by Dr. Samir Sebbah (Oracle America & CIISE, Concordia University)

 

OPTIMAL DESIGN OF ETHERNET RING PROTECTION

Thursday, June 20, 2013, 10:00 a.m., EV 3.309

 

Abstract:

Ethernet Ring Protection (ERP) has recently emerged to provide protection switching in Ethernet networks with sub-50 ms failover capabilities. In addition to Ethernet's cost-effectiveness and simplicity, ERP's promise to also provide protection in mesh packet transport networks positions Ethernet as a prominent competitor to conventional SONET/SDH and the technology of choice for carrier networks. Higher service availability in ERP, however, has been challenged by the issues of network partitioning and contention for shared capacity caused by concurrent failures. In this talk, we show that in a network designed to withstand only single-link failures, network services typically suffer from two categories of outage under dual-link failures. We address the problem of minimal-capacity network design to provide high service availability against dual-link failures. We cast this design problem as an optimization problem and show that higher service availability can be achieved by proper RPL (Ring Protection Link) placement and ring hierarchy selection, with the objective of maximizing the network flow under any dual-link failure. Our design achieves a minimal capacity allocation that minimizes the number of service outages, thereby achieving higher service availability. Numerical evaluations and comparisons are carried out which show the effectiveness (in terms of allocated capacity and service outages) of the presented design approach.

Bio:

Dr. Samir Sebbah is a Senior R&D Scientist with Oracle America, where he has been working on hybrid optimization technologies using constraint programming and operations research since he joined the company in August 2012. He is also an adjunct assistant professor with CIISE, Concordia University. From October 2010 to August 2012 he held an NSERC Visiting Fellowship at Defence R&D Canada in Ottawa. He received an M.Sc. degree in Computer Science and Operations Research from the University of Paris 8, and a Ph.D. degree in Electrical and Computer Engineering from Concordia University in 2010. His research interests are in networking, focusing on the design of large-scale telecommunications systems. In the networking arena, he has worked on the design of survivable optical networks and wireless networks. He has explored the impact of failures in wavelength division multiplexing networks, and investigated large-scale algorithms to support multiple classes of recovery and protection. He has also investigated the trade-off between service availability and capital/operating costs in survivable networks. As an adjunct assistant professor with the CIISE Dept., Concordia University, Dr. Sebbah is involved in projects on the design and optimization of Ethernet Ring Protection networks, wireless networks, and virtual local area networks.

Dr. Sebbah is the co-author of over thirty papers on networking and computer systems. He is an active reviewer for IEEE/ACM Transactions on Networking and Computer Communications. He was the Program co-Chair of the 2010 INFORMS Conference on Telecommunications (Montreal, 2010). His paper (with Dr. Brigitte Jaumard) "A resilient transparent optical network design with a preconfigured extended-tree scheme" received the best paper award in optical networking at the 2009 IEEE International Conference on Communications.

____________________________________________

 

Seminar by Leonard Kleinrock (Distinguished Professor of Computer Science at UCLA)

 

THE INTERNET AND BEYOND

Tuesday, June 11, 2013, 10:15 a.m. - 11:15 a.m., Room EV1.605


Abstract:

Leonard Kleinrock presents the early history of the science and infrastructure that emerged as the ARPANET, as well as the trajectory of development it set for the broader construct that we now call the Internet. He offers a personal and autobiographical perspective, comments on the Internet's current structure, and looks into its possible futures.

Bio:

Leonard Kleinrock developed the mathematical theory of packet networks, the technology underpinning the Internet, while a graduate student at MIT. This was in the period 1960-1962, nearly a decade before the birth of the Internet, which occurred in his laboratory when his Host computer at UCLA became the first node of the Internet in September 1969. He wrote the first paper and published the first book on the subject; he also directed the transmission of the first message ever to pass over the Internet. He was listed by the Los Angeles Times in 1999 as among the "50 People Who Most Influenced Business This Century." He was also listed as among the 33 most influential living Americans in the December 2006 Atlantic Monthly. Kleinrock's work was further recognized when he received the 2007 National Medal of Science, the highest honor for achievement in science bestowed by the President of the United States.


____________________________________________

 

Seminar by Dr. Qiang Ye (University of Prince Edward Island)

 

STCDG: AN EFFICIENT DATA GATHERING ALGORITHM FOR WIRELESS SENSOR NETWORKS

Friday, May 31st, 2013, 10:00am, EV3.309

 

Abstract:

Data gathering is one of the most important issues in wireless sensor networks (WSNs). With traditional data gathering approaches, the sink node receives one data packet from each sensor node in a typical data collection scenario, which leads to a large amount of traffic. As sensor nodes are often battery-powered, the intensity of data traffic has a serious impact on the lifespan of WSNs; if the amount of traffic can be reduced, the lifespan of WSNs will be significantly prolonged. In this talk, we propose an innovative data gathering scheme based on matrix completion, Spatio-Temporal Compressive Data Collection (STCDG), which can significantly increase the lifespan of WSNs in this manner. Technically, STCDG makes use of both the low-rank and short-term stability features to reduce the amount of traffic and improve the level of recovery accuracy. Our experimental results indicate that STCDG outperforms the state-of-the-art data gathering algorithms in terms of recovery error, power consumption, lifespan, and network capacity.
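As an illustrative aside (this sketch is not STCDG itself), the matrix-completion idea it builds on can be demonstrated in a few lines: sensor readings form an approximately low-rank time-by-node matrix, so missing entries can be recovered from a subsample by iterating a truncated SVD fit. All parameters below are illustrative assumptions.

    # Sketch of the underlying matrix-completion idea (not STCDG itself).
    # Readings form an approximately low-rank matrix; iterate: fit a low-rank
    # model by truncated SVD, then re-impose the observed entries.
    import numpy as np

    def complete(observed, mask, rank=2, iters=200):
        """observed: zeros at missing entries; mask: 1 where observed."""
        X = observed.copy()
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            s[rank:] = 0.0                      # keep only a rank-`rank` model
            X = (U * s) @ Vt
            X[mask == 1] = observed[mask == 1]  # snap back to known readings
        return X

    rng = np.random.default_rng(0)
    truth = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 30))  # rank-2 data
    mask = (rng.random(truth.shape) < 0.5).astype(int)           # 50% sampled
    recovered = complete(truth * mask, mask)
    print(np.abs(recovered - truth).max())  # near zero when recovery succeeds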

Bio:

Dr. Qiang Ye is an Associate Professor in the Dept. of Computer Science and Information Technology at the University of Prince Edward Island, Canada. His current research interests lie in the area of communication networks in general. Specifically, he is interested in Wireless Ad Hoc/Sensor Networks, Network Reliability and Security (Wireline and Wireless), and Protocol Modeling/Evaluation. He received a Ph.D. in Computing Science from the University of Alberta in 2007. His M.Engr. and B.Engr. in Computer Science and Technology are from the Harbin Institute of Technology, P.R. China. He is a Member of IEEE and ACM.

____________________________________________

 

Seminar by Dr. Ventzeslav Valev (Institute of Mathematics and Informatics, Bulgarian Academy of Sciences)

 

FROM FEATURES TO PREDICATES OR FROM SYMPTOMS TO SYNDROMES IN SUPERVISED PATTERN RECOGNITION

Wednesday, May 15th, 2013, 11:00am, EV3.309

 

Abstract:

This talk summarizes recent advances in the discovery of empirical regularities by solving the supervised pattern recognition problem when binary features are used in pattern descriptions. A typical example with binary features is medical diagnosis based on the presence or absence of a number of symptoms. The mathematical models used are based on learning Boolean formulas. The formulas are expressed as conjunctions and are called non-reducible descriptors; they correspond to syndromes in medical diagnosis. A combinatorial procedure for the construction of non-reducible syndromes is given, and non-reducible syndromes are extended to generalized non-reducible syndromes. Decision rules and the feature selection problem are discussed. The approach is illustrated with applications to the recognition of Arabic numerals in different graphical representations and the recognition of QRS complexes in electrocardiograms.
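To illustrate the flavour of such descriptors, here is a brute-force sketch under our own simplifying assumptions (made-up data, presence-only conjunctions, and a naive minimality test; the actual combinatorial procedure is more refined):

    # Brute-force sketch with made-up data: find minimal sets of binary
    # features (symptoms) that are all present in some object of class A
    # (sick) and never all present in class B (healthy). Such a conjunction
    # cannot be shortened without covering class B -- "non-reducible".
    from itertools import combinations

    def non_reducible(pos, neg, n_features, max_len=3):
        found = []
        for k in range(1, max_len + 1):
            for feats in combinations(range(n_features), k):
                if any(set(f) < set(feats) for f in found):
                    continue  # a shorter descriptor already works: reducible
                hits_pos = any(all(x[i] for i in feats) for x in pos)
                hits_neg = any(all(x[i] for i in feats) for x in neg)
                if hits_pos and not hits_neg:
                    found.append(feats)
        return found

    sick    = [(1, 1, 0, 1), (0, 1, 1, 1)]   # rows: patients, cols: symptoms
    healthy = [(1, 0, 0, 1), (0, 0, 1, 0)]
    print(non_reducible(sick, healthy, n_features=4))   # [(1,), (2, 3)]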

Bio:

Prof. Ventzeslav Valev, PhD, Dr. of Math. Sci., obtained an M.Sc. degree in Computer Science from the Wroclaw University of Technology, Wroclaw, Poland, and an M.Sc. degree in Mathematics from the University of Wroclaw, Wroclaw, Poland. He obtained a Ph.D. degree in Computer Science from the Dorodnicyn Computing Centre of the Russian Academy of Sciences in Moscow in 1979, and a Doctor of Mathematical Sciences degree in the field of Mathematical Informatics from the Institute of Mathematics and Informatics of the Bulgarian Academy of Sciences, Sofia, Bulgaria, in 1995, where he was elected Full Professor in 2002. In 2010 Dr. Valev was elected an Associated Member of the Institute of Mathematics and Informatics, Bulgarian Academy of Sciences.

Dr. Valev has held appointments at the University of Iowa, the University of Medicine and Dentistry of New Jersey, and Saint Louis University, as well as appointments in Germany, Turkey, Cyprus, Poland, Bulgaria, Saudi Arabia, and Oman. He is the author of more than 50 papers published in Pattern Recognition, Pattern Recognition Letters, International Journal on Machine Graphics & Vision, Critical Reviews in Biomedical Engineering, Lecture Notes in Computer Science (LNCS), and the proceedings of many international conferences. Since 1998 Dr. Valev has been a Fellow of the International Association for Pattern Recognition (IAPR).

____________________________________________

 

Seminar by Dr. Robert E. Tarjan (Princeton University)

 

MYSTERIES OF SEARCH TREES

Tuesday, May 14, 2013, 11:00am, H 767

 

Abstract:

The search tree is one of the most basic and most important data structures in computer science. It lies behind all modern database systems and has many other applications. Although the history of this data structure extends back more than fifty years, we still do not know everything about it. This talk will explore new ideas that lead both to simpler kinds of search trees and to a better analysis of their efficiency.

Bio:

Robert E. Tarjan is the James S. McDonnell Distinguished University Professor of Computer Science at Princeton University and a Visiting Researcher at Microsoft Research. He is an expert in the design and analysis of data structures and graph algorithms. A member of the U.S. National Academy of Sciences and of the U.S. National Academy of Engineering, he was awarded the Nevanlinna Prize in 1982, and, with John Hopcroft, the Turing Award in 1986.

____________________________________________

 

Seminar by Dr. Ehsan Chiniforooshan (Google)

 

NONDETERMINISM IN THE ABSTRACT TILE ASSEMBLY MODEL

Monday, May 13, 2013, 10:00am, EV 3.309

 

Abstract:

Researchers have shown that self-assembly of tile-like DNA structures can be used for nanoscale computations. The abstract Tile Assembly Model (aTAM), proposed by Winfree in 1998, is a simple mathematical abstraction of DNA tile self-assembly, and it has been extensively studied over the past 15 years. In this talk, I will give a brief overview of complexity results for the aTAM, show how allowing nondeterminism can increase the power of the aTAM even when self-assembling a shape deterministically, and finally discuss a number of open questions.

Bio:

Ehsan Chiniforooshan received his M.Sc. from Sharif University of Technology, advised by Rouzbeh Tusserkani, and his Ph.D. from the University of Waterloo under the supervision of Naomi Nishimura. He has worked on problems in Combinatorics, Graph Theory, Data Structures, and Self-Assembly, and is currently a Software Engineer at Google.

____________________________________________

 

 

Seminar by Dr. Lev B. Levitin (Boston University)

 

FUNDAMENTAL PHYSICAL CAPABILITIES AND LIMITATIONS IN COMMUNICATION AND COMPUTING

Tuesday, March 12, 2013, 11:00 a.m., EV 2.260

 

Abstract:

This talk is a review of fifty years of research focused on revealing the ultimate capabilities of physical systems, on one hand, and their fundamental limitations, on the other, in communication and computing. The following topics are considered.
1. Limits on information transmission by physical agents. Capacity and energy efficiency of photon and corpuscular channels. General bound on minimum energy per information unit.
2. The effect of irreversibility of quantum measurements. Entropy defect and “accessible” information.
3. POVM vs. von Neumann measurements in finite- and infinite-dimensional Hilbert spaces.
4. The maximum speed of computing operations. The Mandelstam-Tamm and Margolus-Levitin bounds (standard forms are reproduced after this list). The minimum operation time of quantum gates. The unified tight bound on the rate of computation.
5. Thermodynamic cost of reversible computing. The minimum energy dissipation per computational step.
6. Equivalence relation between information and work. Heat-to-work conversion by use of one-particle and two-particle information.
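For reference, the standard forms of the two bounds named in item 4, where $t_\perp$ is the minimum time for a system to evolve to an orthogonal state, $E$ its mean energy above the ground state, and $\Delta E$ its energy spread:

    \[
      t_\perp \;\ge\; \frac{\pi\hbar}{2\,\Delta E} \quad\text{(Mandelstam--Tamm)},
      \qquad
      t_\perp \;\ge\; \frac{\pi\hbar}{2\,E} \quad\text{(Margolus--Levitin)}.
    \]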

Bio:

Dr. Lev B. Levitin received the M.S. degree in physics from Moscow University, Moscow, USSR, in 1960 and the Ph.D. degree in physical and mathematical sciences from the USSR Academy of Sciences in 1969. Since 1982, he has been with the College of Engineering, Boston University, and since 1986 has been Distinguished Professor of Engineering Science with the Department of Electrical and Computer Engineering at Boston University. He has published over 190 papers, presentations, and patents. His research areas include information theory; quantum communication systems; physics of computation; quantum computing; quantum theory of measurements; mathematical linguistics; theory of complex systems; coding theory; theory of computer hardware testing; reliable computer networks; and bioinformatics. He is a Life Fellow of IEEE and a member of the International Academy of Informatics and other professional societies.

____________________________________________

 

Seminar by Dr. Thomas Triplet (Centre for Structural and Functional Genomics, Concordia University)

 

TOWARDS PERSONALIZED MEDICINE

Friday, March 8, 2013, 10:00 a.m., EV 2.184

 

Abstract:

The mapping of the human genome, completed in 2003 after 13 years of collective effort at an estimated cost of 3 billion dollars, had an immense impact on biomedical research. Earlier this year, Life Technologies presented a small device to sequence an entire human genome in a day for less than $1,000, effectively making personal genomics accessible to most laboratories.

Current databases are typically designed as single organism databases and are not readily amenable to complex system-wide research across multiple species. In this talk, I will present the unique challenges of clinical and biological big data, and review the state-of-the-art in genomics data warehousing.

I will also present versatile classification integration and reclassification methods that can combine existing classifications without requiring access to the raw data, and will discuss how they can be leveraged to combine clinical data with omics databases: more accurate predictors for disease risks and pathologies, integrated with personal omics data, could ultimately lead to early diagnostics and personalized drugs to treat patients given their personal genetic background.

Bio:

Dr. Thomas Triplet is a postdoctoral researcher at the Centre for Structural and Functional Genomics and the Department of Computer Science and Software Engineering at Concordia University. He is also a member of the professional Ordre des Ingénieurs du Québec. He earned his engineering diploma and Master's degree in Computer Science and Engineering, with distinctions, in 2007 at the French National Graduate School of Engineering ENSICAEN. He completed his Ph.D. in bioinformatics after two years under the supervision of Prof. Peter Revesz at the University of Nebraska-Lincoln, USA, where he was a recipient of ISEP and Milton E. Mohr fellowships. His main research interests include the integration and mining of clinical and biological big data for personalized medicine, as well as the visualization and automated analysis of those data using machine learning.

____________________________________________

 

Seminar by Dr. John Plaice (The University of New South Wales)

 

HIGHER-ORDER MULTIDIMENSIONAL PROGRAMMING

Tuesday, March 5, 2013, 11:00 a.m., EV 3.309

 

Abstract:

In 1975, William W. Wadge and Edward A. Ashcroft introduced the language Lucid, in which the value of a variable was a stream. The successors to Lucid took two paths.

The first path, taken by Lustre, was to restrict the language so that a stream could be provided with a timed semantics, where the i-th element of a stream appeared with the i-th tick of the stream's clock, itself a Boolean stream. Today, Lustre is at the core of the Scade software suite, the reference tool for avionics worldwide.

The second path was to generalize the language to include multidimensional streams and higher-order functions. The latest language along this path is TransLucid, a higher-order functional language in which variables define arbitrary-dimensional arrays, where any atomic value may be used as a dimension, and a multidimensional runtime context is used to index the variables.
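
As a rough analogy in ordinary Python (not Lucid or TransLucid syntax), the idea that a "variable" denotes an infinite stream can be mimicked with generators:

    # A toy rendering of the stream-as-variable idea with generators.
    from itertools import islice

    def fby(first, rest):
        """Lucid's 'followed by': one value, then the rest of a stream."""
        yield first
        yield from rest

    def nat():
        """The Lucid idiom  n = 0 fby (n + 1)  as a Python generator."""
        n = 0
        while True:
            yield n
            n += 1

    print(list(islice(fby(42, nat()), 5)))   # [42, 0, 1, 2, 3]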

The presentation will focus on the key problems pertaining to design, semantics and implementation of Lustre and TransLucid, and show how the two paths are being brought back together in the TransLucid project.

Bio:

Dr. John Plaice (BMath 1979, University of Waterloo, Canada; PhD 1984, Grenoble Institute of Technology, France; Habilitation 2010, University of Grenoble) is Adjunct Associate Professor at The University of New South Wales, Sydney, Australia.
He wrote the first semantics and compiler for Lustre (Synchronous Real-Time Lucid), the core real-time programming language in Esterel Technologies' Scade Suite, the leading solution in Europe for developing embedded software meeting stringent avionics standards. Since then, he has developed numerous techniques for adaptation to context, in programming languages, software configuration, electronic documents and digital typography.

________________________________________________

 

Seminar by Dr. Tiberiu Popa (Computer Graphics Lab, ETH Zurich)

 

THE WORLD AT YOUR GEOMETRICKS

Friday, March 1, 2013, 10:00 a.m., EV 11.119

 

Abstract:

Digital geometry processing is a powerful tool used ubiquitously across the digital world, from games and movies to engineering, CAD, medicine, and telepresence. It is also a relatively young field that over the last decade has developed a large set of new algorithms and techniques, which are increasingly finding their way into mainstream applications.

In this talk I will present some of my work in digital geometry processing, with applications in modeling, deformation, novel view synthesis, telepresence, and teleconferencing. I will show how purely geometric algorithms can be used to solve complex problems; I hope to convince you of the relevance of digital geometry processing in today's digital world and to inspire new students to use and study some of these techniques.

Bio:

Dr. Tiberiu Popa is a postdoctoral researcher in the Computer Graphics Lab at ETH Zurich. He completed his Bachelor of Mathematics in 2001 and Master of Mathematics in 2004, both at the University of Waterloo in Canada. In 2010, Tiberiu obtained a PhD from the University of British Columbia in Canada; his dissertation received the Alain Fournier annual thesis award. He started at ETH in January 2010, and since 2011 he has been coordinating the research efforts of the BeingThere Centre in Zurich, a research collaboration on next-generation telepresence systems between ETH Zurich, the University of North Carolina, and Nanyang Technological University in Singapore. Tiberiu's main research interests are in digital geometry processing, spatio-temporal surface acquisition, free-viewpoint video, and telepresence.

________________________________________________

 

Seminar by Dr. Aiman Hanna (Concordia University)

 

A HYBRID FRAMEWORK FOR THE SYSTEMATIC DETECTION OF SOFTWARE SECURITY VULNERABILITIES IN SOURCE CODE

Tuesday, February 12, 2013, 15:00, EV 3.309

 

Abstract:

In this talk, we address the problem of detecting vulnerabilities in software whose source code is available, such as free and open-source software. We rely on security testing that conducts various analyses. Either static or dynamic analysis can be used for security testing, and each has its advantages and drawbacks. In fact, while these analyses are different, they complement each other in many ways. Consequently, approaches that combine them have the potential to be very advantageous for security testing and vulnerability detection. This has motivated the research work discussed in this talk.

For the purpose of security testing, analysts need to specify the security properties against which software should be tested for violations. Accordingly, a security model extending security automata is introduced to allow such specifications. For the purpose of profiling software behavior at run-time, various code instrumentations are needed at different program points; we explore this subject and introduce a compiler-assisted profiler based on the pointcut model of Aspect-Oriented Programming (AOP) languages. Third, we explore the potential of static analysis for vulnerability detection and illustrate its applicability and limitations, with an additional focus on reachability analysis.

Finally, we introduce a more comprehensive security testing and test-data generation framework that offers further advantages over a purely static-analysis model. The framework combines the power of static and dynamic analyses and is used to generate concrete data with which the existence of a vulnerability is proven beyond doubt, thereby mitigating a major drawback of static analysis, namely false positives. We further illustrate the feasibility of the elaborated frameworks by developing case studies for test-data generation and vulnerability detection on software of various sizes and complexities.

Bio:

Dr. Aiman Hanna received his Bachelor of Engineering from Assiut University, Egypt, in 1988, and his Master's and Ph.D. in Computer Science from Concordia University, Canada, in 2000 and 2012, respectively. He worked as a Senior Software Engineer and Team Leader for more than eight consecutive years for some of the largest firms in Canada (BCE and CGI). He is currently a full-time professor at Concordia University, where he has been working for nearly 22 years. His research interests include software security, secure software engineering, vulnerability detection, software security hardening, formal automatic specification, language technologies, formal semantics, and code analysis techniques. For his research work, Dr. Hanna was the recipient of the 2009 OCTAS Award from the Fédération de l'informatique du Québec (FIQ). He has also received the Faculty of Engineering and Computer Science Teaching Excellence Award in 1999 and the Concordia University CCSL Teaching Excellence Award in 2001. Dr. Hanna holds a Professional Engineering License and is a member of Professional Engineers Ontario (PEO).

________________________________________________

 

Seminar by Dr. Andrew King (Simon Fraser University)

 

MATCHINGS, PERFECT MATCHINGS, AND THE LOVASZ-PLUMMER CONJECTURE

Friday, February 8, 2013, 10:00 a.m., EV 3.309

 

Abstract:

A matching in a graph is simply a set of edges, no two of which share an endpoint. Matchings are fundamental not only to graph theory, but to computer science in general, as they can be used to model a broad class of problems in which we must pair up certain objects according to a set of basic constraints. In this talk I will discuss perfect matchings in bipartite graphs -- it is immediately clear how such objects model problems in which we must match objects in one set to objects in another set, with no object left unmatched, for example when we need to match network traffic requests to servers. But these perfect matchings also have less obvious applications in areas such as complexity theory and mathematical chemistry.
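
To make the definition concrete, here is a minimal check (an illustration only, not from the talk) that an edge set is a matching:

    # An edge set is a matching iff no vertex appears in more than one edge.
    def is_matching(edges):
        seen = set()
        for u, v in edges:
            if u in seen or v in seen:
                return False
            seen.update((u, v))
        return True

    print(is_matching([(1, 2), (3, 4)]))   # True
    print(is_matching([(1, 2), (2, 3)]))   # False: vertex 2 is shared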

In the 1970s, Lovász and Plummer conjectured that the number of perfect matchings in a bridgeless cubic graph is exponential in the size of the graph. This was proven by Voorhoeve for bipartite graphs and by Chudnovsky and Seymour for planar graphs. I will present a proof of the full conjecture that uses elements of both earlier proofs, as well as properties of the perfect matching polytope.

This is joint work with Louis Esperet, Frantisek Kardos, Daniel Kral, and Sergey Norin.

Bio:

Andrew King is a PIMS Postdoctoral Fellow working with Pavol Hell and Bojan Mohar at Simon Fraser University. He received his Ph.D. from the School of Computer Science at McGill University under the supervision of Bruce Reed, writing his dissertation on the subject of colouring and decomposing claw-free graphs. Following this he spent two years as an NSERC Postdoctoral Fellow with Maria Chudnovsky at Columbia University's Industrial Engineering and Operations Research Department. His main research interests include graph algorithms, bounding the chromatic number, graph clustering, and structural decomposition.

________________________________________________

 

Lecture by Dr. Burak Kantarci (Post Doctoral Fellow, University of Ottawa)

 

ENERGY EFFICIENT CLOUD NETWORKING: STATE OF THE ART, OPPORTUNITIES AND CHALLENGES

Friday, January 25, 2013, 10:00 a.m., EV 11.119

 

Abstract:

Cloud computing is a newly emerging paradigm that allows ubiquitous provisioning of software, platform, and infrastructure services and enables offloading of local resources. Data centers, the main hosts of cloud computing services, accommodate thousands of high-performance servers and high-capacity storage units. One of the major challenges in cloud computing is energy efficiency: offloading local resources reduces the energy consumption of end hosts, but increases that of the transport network and the data centers. In this talk, I will present existing solutions, opportunities, and challenges in the design of an Internet backbone with data centers and in the energy-efficient delivery of cloud services. A case study will follow, introducing Mixed Integer Linear Programming (MILP)-based provisioning models and heuristics that guarantee either minimum-delay or maximum-power-saving cloud services. I will then extend the scope of the talk to network-aware intra- and inter-data-center virtual machine placement with a commitment to energy efficiency. Furthermore, in conjunction with the advantages of the smart grid, I will introduce recent research results on the impact of Time of Use (ToU)-aware provisioning on the OPEX of network and data center operators. Opportunities and research challenges in this area, including Wireless Sensor Network-based thermal monitoring of data centers as well as security and privacy issues, will conclude the presentation as part of the immediate research agenda.
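
To give a feel for what a MILP-based provisioning model looks like, here is a deliberately tiny sketch: assign service demands to data centers so that total power is minimized, subject to capacity. It is not the talk's model; all names and numbers are invented, and it assumes the PuLP package is installed.

    # Toy energy-aware provisioning MILP (illustrative only).
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    demands = {"d1": 40, "d2": 25, "d3": 35}          # requested units
    centers = {"dcA": (80, 1.2), "dcB": (60, 0.9)}    # (capacity, power per unit)

    prob = LpProblem("energy_aware_provisioning", LpMinimize)
    x = {(d, c): LpVariable(f"x_{d}_{c}", cat=LpBinary)
         for d in demands for c in centers}

    # Objective: total power drawn by the chosen placements.
    prob += lpSum(demands[d] * centers[c][1] * x[d, c]
                  for d in demands for c in centers)

    # Each demand is served by exactly one data center.
    for d in demands:
        prob += lpSum(x[d, c] for c in centers) == 1

    # Respect each data center's capacity.
    for c, (cap, _) in centers.items():
        prob += lpSum(demands[d] * x[d, c] for d in demands) <= cap

    prob.solve()
    print([(d, c) for (d, c), var in x.items() if var.value() == 1])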

Bio:

Burak Kantarci is a postdoctoral fellow at the School of Electrical Engineering and Computer Science of the University of Ottawa, where his research is supervised by Prof. Hussein Mouftah, who also co-supervised his PhD thesis. Dr. Kantarci received the M.Sc. and Ph.D. degrees in Computer Engineering from Istanbul Technical University in 2005 and 2009, respectively, and completed the major content of his PhD thesis at the University of Ottawa between 2007 and 2008. He was the recipient of the Siemens Excellence Award in 2005 for his contributions to optical burst switching research. He has co-authored seventeen articles in established journals and forty-seven papers in flagship conferences, and has contributed five book chapters. He is a co-editor of the forthcoming book Communication Infrastructures for Cloud Computing, to be published by IGI Global in 2013. He has served on the TPCs of the Green Communication Systems Track of the IEEE GLOBECOM and IEEE ICC conferences. Dr. Kantarci is a Senior Member of the IEEE and a founding member of the IEEE ComSoc Technical Sub-committee on Green Communications and Computing.

________________________________________________

 

Lecture by Dr. Eric Charton (Post Doctoral Fellow, Centre de Recherche Informatique de Montreal)

 

NAMED ENTITIES DETECTION AND ENTITY LINKING IN THE CONTEXT OF SEMANTIC WEB

Friday, December 7, 2012, 11:00 a.m., EV 3.309

 

Abstract:

Entity linking consists of establishing the relation between a textual entity in a text and its corresponding entity in an ontology. The main difficulty of this task is that a textual entity can be highly polysemic and potentially related to many different ontological representations. To solve this problem, various information retrieval techniques can be used, most of which rely on contextual words to estimate which exact entity has to be recognized. In this talk, we will explore the question of entity linking and the disambiguation problems it involves. We will describe how a detection and disambiguation resource built from the Wikipedia encyclopaedic corpus can be used to establish a link between a named entity (NE) in a text and its normalized ontological representation in the semantic web.
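
A bare-bones illustration of context-based disambiguation (not the speaker's system; the candidate entries below are invented for the sketch) is to score each candidate ontology entry by word overlap with the words surrounding the mention:

    # Pick the candidate whose description best overlaps the mention context.
    def link_entity(context_words, candidates):
        """candidates maps an ontology URI to a bag of description words."""
        ctx = {w.lower() for w in context_words}
        return max(candidates, key=lambda uri: len(ctx & candidates[uri]))

    candidates = {
        "dbpedia:Paris":        {"france", "capital", "city", "seine"},
        "dbpedia:Paris_Hilton": {"heiress", "celebrity", "hotel"},
    }
    print(link_entity("the capital city on the Seine".split(), candidates))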

Bio:

Eric Charton (Ph.D., M.Sc.) is a researcher in the field of machine learning and natural language processing and their application to the semantic web. He has worked in various university labs in France (Laboratoire Informatique d'Avignon) and Québec (École Polytechnique de Montréal). His research work has been evaluated in scientific evaluation campaigns such as CoNLL and ESTER, and is publicly released in the NLGbAse ontology (www.nlgbase.org) and the Wikimeta semantic labeling tool (www.wikimeta.com).
Eric Charton is also the author of general-audience books on computer science and technology published by Pearson and Simon & Schuster Macmillan. He currently works as a researcher at the Centre de Recherche Informatique de Montreal (CRIM) on a project to improve a search engine using semantic web techniques.

________________________________________________

 

Lecture by Josep Lluis Larriba Pey (Universitat Politècnica de Catalunya, Barcelona)



MANAGING GRAPH PROBLEMS EFFICIENTLY

Friday, November 9, 2012, 14:00, EV 3.309

 

Abstract:

Graph management is an increasingly important problem. In this talk we introduce the work of DAMA-UPC on graph management, present some basic concepts, and propose a technology for managing large graphs efficiently. The technology presented, DEX, has evolved into a product commercialized and maintained by Sparsity Technologies (www.sparsity-technologies.com), a spin-out of UPC. We show results comparing DEX to other technologies capable of solving the same graph problems, demonstrating better performance and scalability for DEX on large graphs on single-processor hardware.

Bio:

Josep Lluis Larriba Pey is the director of DAMA-UPC in Barcelona, Spain; his interests include performance, exploration, and quality in data management, with a particular focus on large data volumes.

________________________________________________

 

Lecture by Dr. Ming Ouyang (Department of Computer Engineering & Computer Science, University of Louisville)



DEVELOPMENTS IN GENERAL PURPOSE GPU COMPUTING

Monday, November 5, 2012, 15:00, EV 3.309

 

Abstract:

Graphics processing units (GPUs) on commodity video cards were originally designed to meet the 3-D gaming industry's need for high-performance, real-time graphics. They have since become powerful co-processors to CPUs: top-of-the-line Nvidia GPUs for computation have 512 cores on one chip. Scientists and engineers from many disciplines are exploring ways to use this massive amount of parallel computation. This presentation gives an introduction to GPU hardware and programming, and a survey of some applications.
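
The flavour of GPU programming is one thread per data element. A minimal data-parallel kernel (SAXPY) written with Numba's CUDA bindings is sketched below; it assumes the numba package and an NVIDIA GPU with a working CUDA driver, and is an illustration rather than material from the talk.

    # SAXPY on the GPU: out[i] = a * x[i] + y[i], one thread per element.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def saxpy(a, x, y, out):
        i = cuda.grid(1)              # global thread index
        if i < out.size:              # guard against out-of-range threads
            out[i] = a * x[i] + y[i]

    n = 1 << 20
    x = np.ones(n, dtype=np.float32)
    y = np.arange(n, dtype=np.float32)
    out = np.empty_like(x)

    threads = 256
    blocks = (n + threads - 1) // threads
    saxpy[blocks, threads](np.float32(2.0), x, y, out)
    print(out[:3])                    # [2. 3. 4.]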

Bio:

Dr. Ming Ouyang has a B.S. degree in Computer Science from National Taiwan University, an M.S. degree in Computer Science from Stony Brook University, and a Ph.D. degree in Computer Science from Rutgers University, under the supervision of Dr. Vasek Chvatal. He joined the Computer Engineering and Computer Science Department of University of Louisville as an Assistant Professor in 2007.

 

_____________________________________________

 

Lecture by Professor Diane L. Souvaine (Department of Computer Science, Tufts University)



EXPLORATIONS IN GEOMETRIC RECONFIGURATIONS

Thursday July 19, 3:00-4:00PM, EV 3.309

 

Abstract:

Computational geometry is a field that deals with algorithmic aspects
of geometric problems.  Geometric problems pervade a broad spectrum of
disciplines, with cartography, computer vision, wireless communications,
robotics, and computer-aided design and manufacturing representing but
a few.  In computational geometry, we study geometric problems at various
levels of abstraction from the real-life applied problems from which they
may be drawn: sometimes, we may work to provide practical solutions that
are as efficient and as accurate as possible for immediate use;  other
times, we may work to establish clear bounds on the complexity of more
theoretical abstract questions whose practical applicability is not yet
apparent.  And yet the theoretical tools that we develop today may affect
the practice of tomorrow.

This talk will focus on a particular branch of computational geometry,
that of geometric reconfigurations.  Geometric reconfigurations abound
in numerous applications, from nano-self-assembly to the movement of robot
arms to solving Rubik's cubes and other mathematical puzzles. This talk
will explore some different types of geometric reconfiguration as well
as underlying computational techniques used to solve these problems.
No specific background is assumed.


________________________________________________

 

Lecture by Professor Faramarz F. Samavati, PhD (University of Calgary):


SKETCH-BASED MODELING FOR DETAILED 3D SHAPES

Wednesday, July 11, 2012, 2:00PM, EV 3.309

 

Abstract:

Many interesting sketch-based modeling techniques have been developed in recent years. However, these techniques are mostly suitable for creating simple and usually low-quality shapes. To address this shortcoming, two research projects have been explored in my group. In this talk, I will present these sketch-based projects (NaturaSketch and PUPs) for modeling detailed 3D shapes. NaturaSketch is an image-assisted sketch-based system for creating and deforming subdivision and multiresolution surfaces. PUPs (Partition of Unity Parametrics) is a natural extension of NURBS that allows us to support high-quality sketched features.
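
As background on the machinery that NURBS (and, by extension, parametrics such as PUPs) build on, here is the generic Cox-de Boor recursion for B-spline basis functions; this is a textbook formula, not code from NaturaSketch or PUPs.

    # Cox-de Boor recursion for B-spline basis functions.
    def bspline_basis(i, k, t, knots):
        """Value of the degree-k B-spline basis function N_{i,k} at t."""
        if k == 0:
            return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
        left = right = 0.0
        if knots[i + k] != knots[i]:
            left = ((t - knots[i]) / (knots[i + k] - knots[i])
                    * bspline_basis(i, k - 1, t, knots))
        if knots[i + k + 1] != knots[i + 1]:
            right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                     * bspline_basis(i + 1, k - 1, t, knots))
        return left + right

    knots = [0, 0, 0, 1, 2, 3, 3, 3]
    print(bspline_basis(2, 2, 1.5, knots))   # a quadratic basis value in (0, 1)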

Bio:

Faramarz F. Samavati is a Professor and Associate Head (Graduate Director) of the Department of Computer Science at the University of Calgary. His research interests include computer graphics, visualization, and 3D imaging. Dr. Samavati has published more than 90 papers and one book, and has filed two patents. Currently, he is an Associate Editor of Computers & Graphics (an Elsevier journal) and a Network (principal) Investigator of the GRAND NCE (Networks of Centres of Excellence of Canada in Graphics, Animation and New Media), where he is also the lead of the SKETCH project.

 

________________________________________________

 

Lecture by Dr. Stephann Makri (University College London Interaction Centre):

 

COMING ACROSS INFORMATION SERENDIPITOUSLY: AN EMPIRICAL STUDY

Tuesday May 1, 2012, 11am-12noon, EV3.309

 

Abstract:
We wanted to gain a detailed empirical understanding of how
researchers come across information serendipitously, grounded in
real-world examples. To gain this understanding, we asked 28
researchers from a broad cross-section of disciplines to discuss in
detail memorable examples of coming across information serendipitously
from their research or everyday life. We found that although the
examples provided were varied, they shared common elements
(specifically, they involved a mix of unexpectedness and insight and
led to a valuable, unanticipated outcome). These elements form the
core of 1) a descriptive model of serendipity and 2) a framework for
subjectively classifying whether or not a particular experience might
be considered serendipitous and, if so, how serendipitous. In this
talk, we discuss this model and framework and the implications of our
findings for the design of interactive systems.

Bio:
Dr. Stephann Makri is a Research Associate at University College
London Interaction Centre and is conducting research as part of a
£1.87m UK Research Council funded project (SerenA: Chance Encounters
in the Space of Ideas) which aims to understand how people come across
information 'serendipitously' and to design ubiquitous computing
systems based on this understanding.

 

________________________________________________

 

 

 

Lecture by Dr. Therapon Skotiniotis (Amazon) :

 

MODULAR ADAPTIVE PROGRAMMING

Tuesday February 14, 2012, 10:00am, EV3.309

 

Abstract:

Adaptive Programming (AP) provides advanced code modularization for traversal-related concerns in object-oriented programs. Computation in AP programs consists of (i) a graph-based model of a program's class hierarchy, (ii) a navigation specification, called a strategy, and (iii) a visitor class with specialized methods executed before and after traversing objects. Despite the benefits of AP there are also limitations: hardcoded name dependencies between strategies and the class hierarchy, as well as non-modular adaptive code (strategies and visitors). These limitations hamper adaptive-code reuse and make composition and extension of adaptive code difficult.
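
To give the flavour of the three ingredients (class graph, strategy, visitor), here is a toy in plain Python. It is an analogy only: the traversal is hard-coded below, whereas a real AP engine such as the Demeter tools derives it from a strategy like "from Company to salary".

    # Toy AP-style traversal: a visitor's before-method fires at each salary.
    class Company:
        def __init__(self, depts): self.depts = depts

    class Dept:
        def __init__(self, salaries): self.salaries = salaries

    class SalaryVisitor:
        def __init__(self): self.total = 0
        def before_salary(self, s): self.total += s

    def traverse(company, visitor):
        for dept in company.depts:          # walk the class graph Company -> Dept -> salary
            for s in dept.salaries:
                visitor.before_salary(s)

    v = SalaryVisitor()
    traverse(Company([Dept([50, 60]), Dept([70])]), v)
    print(v.total)                          # 180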

To address these limitations we define "What You See Is What You Get" (WYSIWYG) strategies, constraints and Demeter Interfaces. WYSIWYG strategies guarantee the order of strategy nodes in selected paths, simplifying the semantics of strategies and leading to more predictable behavior. Constraints provide a new mechanism that allows programmers to define invariants on the graph-based model of a program's hierarchy, thereby making programmers' assumptions explicit and verifiable at compile time. Finally, Demeter Interfaces provide (i) an interface between the program's class hierarchy and both strategies and visitors, (ii) statically verifiable constraints on the structure of a class hierarchy that implements a Demeter interface and (iii) the ability to parametrize adaptive code.

We further show that our results can be applied to other technologies that share the relevant properties of Adaptive Programs -- traversals of graph-like structures using selector languages -- such as XML processing, and we discuss new future directions made possible by the advantages that Demeter Interfaces bring to AP.

Bio:

After receiving a B.Sc. in Joint Mathematics and Computer Science at Imperial College, I joined the Illinois Institute of Technology (IIT), where I started a PhD in Computer Science under the supervision of Dr. Morris Chang with a focus on memory management systems for the JVM. In 2001, after receiving my Master's from IIT, I moved to Northeastern University, where I completed my PhD in Computer Science under the supervision of Dr. Karl Lieberherr with a focus on Software Engineering and Programming Languages. For the past three years I have been working as a Software Development Engineer at Amazon, where I have been involved in the design, implementation, and maintenance of an internally developed web framework and web services used by multiple teams to develop web sites and distributed business applications.

 

________________________________________________

 

Lecture by Dr. Peter Rigby (McGill University) :

 

EMPIRICAL SOFTWARE ENGINEERING: A CASE STUDY OF PEER REVIEW IN OPEN SOURCE SOFTWARE

Thursday February 9, 2012, 10:00am, EV3.309

 

Abstract:

How do we know which Software Engineering (SE) practices produce the highest quality software systems on time and on budget?

In answer to this question, I will discuss my Empirical Software Engineering research program. This area interests me because validated empirical evidence about software practices allows us to understand which practices are successful and in which contexts they can be transferred to other projects. Adoption of evidence-based SE practices increases the likelihood that a software product will be of high quality, as well as on time and on budget. To generate validated hypotheses and more general theories, empirical SE requires the collection of evidence from multiple sources and the use of a variety of methods (triangulation). I have shown the fruitfulness of this approach in a number of empirical SE studies. For example, I examined the effects of distributed version control on developer organization and system architecture. In my dissertation, I performed a systematic and comparative evaluation of open source software peer review practices.

Bio:

Peter C. Rigby is a postdoctoral researcher working with Dr. Robillard at McGill University in Montreal. He received his PhD from the University of Victoria for his examination of the peer review practices used by OSS projects; his PhD advisers were Dr. Storey and Dr. German. He received a Bachelor's degree in Software Engineering at the University of Ottawa and has taught two third-year Software Engineering courses (Software Maintenance and HCI). His primary research interest is in mining empirical software engineering data to understand how people collaborate to design and develop large, successful software systems. His three current research areas are: informal API documentation quality, lightweight industrial review techniques, and the effect distributed version control is having on developer collaboration. Please see helium.cs.uvic.ca for more details.

 

________________________________________________

 

Lecture by Sofian Audry :

 

AN EMBODIED MACHINE LEARNING AGENT

Wednesday, February 8, 2012, 13:00, EV 11.705 (Hexagram-Concordia Research-Creation Brown Bag Series)

 

Abstract:

This presentation focuses on an experimental project featuring a minimalist embodied agent embedded in real life. The agent adapts to its environment through a single perceptual modality by relying on a machine learning approach. The goal of this project is to start experimenting with interactive learning agents as ways of creating meaningful aesthetic experiences. Appealing to concepts from cultural studies, science and technology studies, cognitive science, phenomenology, and performativity theory, I build the argument that the embodied interaction of the agent with its world becomes the site of an aesthetic experience and of the production of meaning. Furthermore, I show how its connectionist structure and its learning behavior augment the world by extending it with a brainlike phenomenon that couples with it.

Hexagram-Concordia: Research-Creation Brown Bag Series

Graduate students in all disciplines are invited to present their practice and/or research and engage with active peers on topics surrounding research-creation, ontological perspectives on art, and how artistic practices create knowledge, among others. This series of student-organized talks, seminars, and roundtable discussions has been initiated in an effort to strengthen graduate student participation in Hexagram as a platform for furthering exchange and collaboration. A regular calendar of talks through Winter 2012 is currently being organized. Please look out for more announcements related to this series in the coming months.

For more information contact the organizers via Harry Smoak.

 

________________________________________________

 

Lecture by Dr. Nikolaos Tsantalis (University of Alberta) :

 

AUTOMATED DETECTION OF DESIGN PATTERNS AND REFACTORING OPPORTUNITIES IN OBJECT ORIENTED SYSTEMS

Tuesday February 7, 2012, 10:00am, EV3.309

 

Abstract:

In this talk I will cover the results of my research on the detection of design pattern instances and the identification of refactoring opportunities in object-oriented systems. Knowledge of the design pattern instances implemented in a software system provides a better understanding of its overall architecture and of the design decisions made during its evolution, facilitates its extension to new requirements through pattern extension mechanisms, and improves communication among its developers through a common vocabulary of design concepts. However, finding the implemented pattern instances in a software system is not a trivial task: they are usually not documented, they do not follow standard naming conventions, their implementation may deviate from the standard description, and their manual detection is prohibitive for large systems. To overcome these difficulties, we have proposed a technique for the structural detection of design pattern instances based on a graph similarity algorithm. The proposed technique is scalable to large systems, robust to pattern deviations, highly accurate, and easily extensible to new pattern definitions. According to several studies, maintenance occupies the largest percentage (up to 90%) of the total software development cost. This is because a software product must constantly evolve, providing new features, bug fixes, performance improvements, and integration of novel technologies, in order to remain competitive and successful over time.

Despite the major importance of software maintenance, the resources that software companies invest in preventive maintenance (i.e., maintenance aiming to improve maintainability and avoid future design problems) are very limited (less than 5% of the total maintenance cost), since manual, human-driven inspection of source code requires tremendous effort and leads to long-term benefits that do not add immediate value to the software product. As a result, there is a clear need for tools that support and automate the preventive maintenance process. To this end, we have developed techniques that resolve major design problems by identifying and suggesting appropriate refactoring opportunities. This refactoring-oriented approach provides a complete solution for preventive maintenance (in contrast to existing approaches that focus only on the detection of design problems) by covering all distinct activities of the refactoring process. This includes the application of the suggested refactoring solutions in a way that preserves program behavior, and a ranking mechanism based on their impact on design quality that allows maintenance effort to be prioritized on the parts of the program that would benefit the most.

Bio:

Nikolaos Tsantalis received his BS, MS and PhD degrees in applied informatics from the University of Macedonia, Greece, in 2004, 2006 and 2010, respectively. He is currently a Postdoctoral Fellow at the Department of Computing Science, University of Alberta, Canada. His research interests include design pattern detection, identification of refactoring opportunities, and design evolution analysis. He has developed tools, such as the Design Pattern Detection tool and JDeodorant, which have been widely acknowledged by the software maintenance community. He is a member of the IEEE and the IEEE Computer Society.

 

________________________________________________

 

Lecture by Dr. Jelena Trajkovic (Center for Embedded Computer Systems at UC Irvine) :

 

AUTOMATIC DESIGN AND OPTIMIZATION OF EMBEDDED COMPUTING PLATFORMS

Tuesday January 31, 2012, 10:00am, EV3.309

 

Abstract:

The stringent performance constraints on application software continue to grow, particularly in the embedded and mobile computing domains. Conventional processors and on-chip communication architectures do not provide the necessary throughput for meeting these constraints. There is a widely acknowledged need for Application-Specific Processors (ASPs), as well as fast and scalable on-chip communication technology, to meet the performance needs of next-generation embedded applications. In this talk, I will present my work on two distinct, yet complementary, design technologies for embedded computing platforms. The first design technology is automatic generation of ASPs from software. I will discuss fast, scalable, and controllable algorithms to automatically create a pipelined ASP core from a given application's C code. The performance and resource usage of the generated ASP are comparable to manual hardware design, thereby providing much faster computing than a general-purpose processor while still being programmable. The second design technology is Optical Network-on-Chip (ONoC), which provides significantly higher communication bandwidth than conventional communication architectures in multi-core systems. I will present algorithms for optimizing the mapping of communication channels to optical waveguides, which is a key problem in ONoC-based design. Finally, I will discuss my future research directions that aim to build a comprehensive framework for design and optimization of ASP- and ONoC-based embedded computing platforms.

Bio:

Jelena Trajkovic is a ReSMiQ post-doctoral scholar at Ecole Polytechnique de Montreal and is affiliated with the Center for Embedded Computer Systems at UC Irvine. She received her PhD from the University of California, Irvine in 2009. She holds an M.S. in Information and Computer Science from the University of California, Irvine (2003), and a Dipl. Ing. degree in electrical engineering from the School of Electrical Engineering at the University of Belgrade, Serbia (2000). Her research interests include novel architectures and design automation methods for embedded systems, as well as design and modeling of optical networks-on-chip for many-core platforms.

 

________________________________________________

 

Lecture by Dr. Ebrahim Bagheri (AU School of Computing and Information Systems):

 

ON MARRYING QUALITY ENGINEERING AND LARGE-SCALE SOFTWARE REUSE

Thursday January 26, 2012, 10:00am, EV11.119

 

Abstract:

Today's world economy relies heavily on large-scale software infrastructures that facilitate the seamless integration of information and enable the smooth interaction of heterogeneous systems. The development of such software infrastructure requires the investment of tremendous amounts of resources; the United States alone spends around $250B annually on software development projects. However, only a very small portion of these investments is actually fruitful: in the end, over 80% of these projects do not meet expectations. In this talk, I will focus on three pivotal characteristics of software development, namely agility, product quality, and system scale, that can influence the success or failure of software development endeavors. I will discuss how weaving quality engineering techniques into the software product line engineering paradigm can have an effective impact on the software development process. I will further discuss the details of a decision support platform that incorporates semantic web, natural language processing, and visualization techniques for enhancing the quality of software product line engineering artifacts, along with its observed empirical impact on software developers.

Bio:

Ebrahim Bagheri is currently an Assistant Professor at the AU School of Computing and Information Systems and a Visiting Professor with the University of British Columbia. He also holds an IBM CAS Faculty Fellowship and an Honorary Research Associate appointment at the University of New Brunswick. Ebrahim specializes in topics ranging from the meta-modeling of complex interconnected systems to collaborative information systems design. Currently, his research focuses on two areas: quality engineering for software product lines, and knowledge management for enterprise engineering. His work on collaborative modeling is one of a kind in providing tools and techniques for collaborative risk management and quality engineering. He has published over 80 papers in top-tier journals and conferences and has served as Program Committee Chair and Member for several international conferences and workshops.
He can be reached at http://ebagheri.athabascau.ca/

 

________________________________________________

 

Lecture by Dr. Daniel Sinnig (Desjardins Technology Group) :

 

USE CASE MODELING: CURRENT TRENDS AND FUTURE RESEARCH

Tuesday January 24, 2012, 10:00am, EV11.119

 

Abstract:

Use case modeling has become part of mainstream software engineering practice as a key activity in conventional software development processes. When written correctly, the use case model has the potential to drive all subsequent development work and serves as a reference point for maintenance and documentation purposes. Writing effective and well-structured use cases, however, is a difficult task that requires a thorough understanding of the concepts and techniques involved. Current practice has shown that it is easy to misuse them, or to make mistakes that render them useless at best and, in many cases, propagate incorrect requirements.

In this presentation, I survey a number of best practices, praxis-proven templates and guidelines for writing effective use cases. I continue with a review of my past and current research in the area of use case semantics, test case generation, and merging of use case models. I then examine their interrelation with non-functional requirements such as user interface and business transaction requirements. I conclude by presenting a vision of a unified model for software requirements which is followed by a discussion of anticipated research in the area of requirements engineering.

Bio:

Dr. Sinnig is a Senior Consultant in Application Security at the Desjardins Technology Group. He holds a PhD in Computer Science from Concordia University and completed his post-doctoral tenure at the University of Rostock (Germany). His research interests lie in software engineering and human-computer interaction, with a particular focus on unifying theories and models that can bridge both disciplines. Dr. Sinnig is a co-author of the ISO 27034 standard on application security and a member and officer of the IFIP 13.2 working group on Methodologies for User-Centered Systems Design. He has held various awards and scholarships including the NSERC PGS and PDF awards and received the Concordia University Doctoral Prize in Engineering and Computer Science for the 2009 academic year.

 

________________________________________________

 

 

Lecture by Dr. Emil Vassev (University of Limerick, Ireland):

 

ENGINEERING SELF-ADAPTIVE SYSTEMS - CHALLENGES AND APPROACHES

Tuesday, January 17, 10:00am, EV3.309

 

Abstract:

A self-adaptive system changes its behaviour in response to stimuli from its execution and operational environment. As software is used for more pervasive and critical applications, support for self-adaptation is increasingly seen as vital in avoiding costly disruptions for repair, maintenance, and evolution of systems. However, the wider use of self-adaptive systems in a variety of domains also leads to more challenges in designing and developing them. Self-adaptation may result in changes to some functionality, algorithms, or system parameters, as well as to the system's structure or any other system aspect. Moreover, an autonomic self-adaptive system has intrinsic intelligence that may help it reason about situations where autonomous decision making is required.

In this talk, I briefly survey some of my past, ongoing and future research that strives to meet the challenge of developing autonomic self-adaptive systems. The talk spans over multiple projects and covers: expressing self-* requirements; modeling autonomic systems; developing software-verification mechanisms for self-adaptive systems; handling uncertainty in self-adapting behaviour; knowledge representation and reasoning for cognitive systems; and awareness.

Bio:

Dr. Emil Vassev received his M.Sc. in Computer Science (2005) and his Ph.D. in Computer Science (2008) from Concordia University, Montreal, Canada. Currently, he is a research fellow at Lero (the Irish Software Engineering Research Centre) at the University of Limerick, Ireland, where he is: 1) leading the Lero participation in the ASCENS European FP7 project; 2) leading Lero's joint project with ESA on Autonomous Software Systems Development Approaches; and 3) participating in the FastFix European FP7 project and in the MODEVO project. Dr. Vassev's current research focuses on knowledge representation and self-awareness for self-adaptive systems. More broadly, his research interests are in software development methodologies for developing autonomic systems. Dr. Vassev holds a USA NASA patent on a "Method of Improving System Performance and Survivability through Self-sacrifice".

 

________________________________________________

 

Lecture by Dr. Frank Rudzicz (University of Toronto) :

 

FIRST, WE SHAPE OUR TOOLS: HOW TO BUILD A BETTER SPEECH RECOGNIZER

Friday, December 2, 14:00, EV003.309

 

Abstract:

In this talk I briefly survey some of my previous research and then, even more briefly, extrapolate as to future extensions of this work. I will talk about improving Automatic Speech Recognition (ASR) for speakers with speech disabilities by incorporating knowledge of their speech production. This involves the acquisition of the TORGO database of disabled articulation, which demonstrates several consistent behaviours among speakers, including predictable pronunciation errors. Articulatory data are then used to train augmented ASR systems that model the statistical relationships between the vocal tract and its acoustic effluence. I show that dynamic Bayesian networks augmented with instantaneous articulatory variables outperform even discriminative alternatives. This leads to work that incorporates a more rigid theory of speech production, i.e., task-dynamics, that models the high-level and long-term aspects of speech production. For this task, I devised an algorithm for estimating articulatory positions given only acoustics that significantly outperforms the former state of the art. Finally, I present ongoing work on the transformation of disabled speech signals to make them more intelligible to human listeners, and I conclude with some thoughts as to possible paths we may now take.

Bio:

Frank Rudzicz received his PhD in Computer Science from the University of Toronto in 2011, his Master's degree in Electrical and Computer Engineering from McGill University in 2006, and his Bachelor's in Computer Science from Concordia University in 2004. He is the recipient of a MITACS Accelerate Canada award, a MITACS Industrial Elevate award, and an NSERC Canada Graduate Scholarship. His expertise includes parsing in natural language processing, acoustic modelling, multimodal interaction, and speech production.

 

________________________________________________

 

Lecture by Wojciech Szpankowski (Dept. Computer Science, Purdue University, IN) :

 

ALGORITHMS, COMBINATORICS, INFORMATION, AND BEYOND

Monday, October 3, 2011, 10:30am, EV003.309 (open to all)


________________________________________________

Lecture by Professor G.C. Sharma (Department of Mathematics and Computer Science, Institute of Basic Science, Agra) :

 

PERFORMANCE ANALYSIS OF MANUFACTURING SYSTEM WITH STANDBYS, WORKING VACATION AND SERVER BREAKDOWN

Wednesday, July 6, 2011, 13:00, EV003.309


Abstract:

This investigation is concerned with a queueing model for the performance analysis of a manufacturing system with standbys, working vacation, and server breakdown. As soon as an operating unit fails, it is immediately replaced by a standby unit for the smooth running of the manufacturing system. When there is no failed unit in the system, the server goes on vacation; during the vacation the server still performs some work, and is therefore said to be on working vacation. The lifetimes and the repair times of the manufacturing units are assumed to be exponentially distributed. The matrix geometric method is used to evaluate various performance measures, such as the expected number of failed units and the expected number of operating units in the manufacturing system, machine availability, and operating utilization. A cost function is established to maximize the gain, and a sensitivity analysis is carried out to examine the effect of different parameters on various system characteristics.
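
A much-simplified cousin of such a model (omitting the vacation and breakdown features, and with invented rates) already shows how the performance measures fall out of balance equations:

    # Machine-repair sketch: M operating units, S cold standbys, one
    # repairman; per-unit failure rate lam, repair rate mu.  State n is the
    # number of failed units; birth-death balance equations give p[n].
    M, S = 4, 2
    lam, mu = 0.3, 1.0

    def failure_rate(n):
        operating = min(M, M + S - n)    # standbys keep M units running while they last
        return operating * lam

    p = [1.0]                            # unnormalized steady-state probabilities
    for n in range(M + S):
        p.append(p[-1] * failure_rate(n) / mu)
    total = sum(p)
    p = [v / total for v in p]

    expected_failed = sum(n * q for n, q in enumerate(p))
    avg_operating = sum(min(M, M + S - n) * q for n, q in enumerate(p))
    print("E[failed units]     :", round(expected_failed, 3))
    print("machine availability:", round(avg_operating / M, 3))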

Bio:

Professor G.C. Sharma is a former Professor and Head of the Department of Mathematics and Computer Science, Institute of Basic Science, Agra. He was the founding director of the “Seth Padam Chand Jain Institute of Commerce, Business Administration and Economics”, the “Institute of Vocational Education”, and the “Institute of Engineering and Technology” of Dr. B. R. Ambedkar University, Agra. More than 50 students have received their Ph.D. degrees under his supervision, and more than 150 research papers and 20 books are to his credit. His research areas include queueing and reliability models, computational fluid dynamics, and bioinformatics. At present he is actively engaged in interdisciplinary research on the modeling of human diseases, namely HIV, TB, malaria, and cancer.

________________________________________________

 

Lecture by Dr Madhu Jain (Department of Mathematics, Indian Institute of Technology, Roorkee, India) :

 

QUEUEING MODELLING AND OPTIMAL CHANNEL ALLOCATION IN WIRELESS COMMUNICATION NETWORKS

Tuesday, July 5, 2011, 13:00, EV003.309


Abstract:

Wireless communication networks must utilize their channels efficiently, so the problem of allocating channels in a way that maximizes output is of vital importance. In the present investigation, an optimal channel allocation scheme for a cellular radio system is suggested, in which a specific number of channels is reserved for handoff calls to give them priority over new calls. Provision is made for sub-rating and for a buffer. Calls are assumed to arrive in Poisson fashion, whereas the service times, along with the cell residence time, are exponentially distributed. To establish steady-state indices, the product method is employed by balancing the in-flow and out-flow rates, and the Runge-Kutta (R-K) technique is used to solve the system of transient equations. Various performance indices are established in terms of the transient probabilities, and a sensitivity analysis is carried out to examine the effects of various system parameters on the performance measures.
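
The classic guard-channel scheme, a simplified relative of the model above, makes the reservation idea concrete: with C channels and g reserved for handoffs, new calls are admitted only while more than g channels are free. The rates in the sketch below are invented.

    # Guard-channel birth-death model: state n = number of busy channels.
    C, g = 10, 2
    lam_new, lam_handoff, mu = 4.0, 1.0, 1.0

    p = [1.0]                            # unnormalized steady-state probabilities
    for n in range(C):
        arrival = lam_handoff + (lam_new if n < C - g else 0.0)
        p.append(p[-1] * arrival / ((n + 1) * mu))
    total = sum(p)
    p = [v / total for v in p]

    print("new-call blocking:", sum(p[C - g:]))   # at most g channels free
    print("handoff dropping :", p[C])             # all channels busy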

Bio:

Dr Madhu Jain is a faculty member in the Department of Mathematics, Indian Institute of Technology Roorkee, India. She is the recipient of two gold medals at the M.Phil. level, and has more than 200 research publications in reputed journals including Applied Mathematical Modelling, Applied Mathematics and Computation, Computers and Operations Research, and Computers in Biology and Medicine. She was conferred the Young Scientist Award of the Department of Science and Technology (India) and the Career Award of the University Grants Commission (India). Her current research interests include queueing theory, stochastic models, software reliability, wireless communication, and bioinformatics.

________________________________________________

Lecture by Narsingh Deo (Director, Center for Parallel Computation at University of Central Florida) :

 

DESIGNING MULTIPLE-FAULT TOLERANT RAID'S: A GRAPH-THEORETIC ALGORITHM FOR DATA AND PARITY PLACEMENT

Monday, July 4, 2011 from 10:00 - 11:00am in EV3.309


Abstract:

Redundant Arrays of Independent Disks (RAID) systems have come into widespread use because of their enhanced I/O bandwidths, large capacities, and low cost. However, the increasing demand for greater array capacities at low cost has led to the use of arrays with larger and larger numbers of disks, which increases the likelihood of the concurrent occurrence of two or more random disk failures. Hence the need for RAID systems that tolerate two or more random disk failures without compromising disk utilization. In this talk, we will present a novel algorithm based on the perfect 1-factorization of the complete graphs K_P and K_(2P-1) for placing data and parity in two-disk fault-tolerant arrays with (P - k) and (2P - 1 - k) disks respectively, where P is a prime number and k ≥ 1. Furthermore, we determine the fraction of space used for storing parity in such arrays and show that this fraction attains its optimal value when k = 1.
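
As background vocabulary only (not the talk's placement algorithm), here is the classic way the edges of K_P, for P an odd prime, split into P near-perfect matchings; constructions of this kind are the raw material of one-factorization-based codes.

    # In round i, vertices a and b are paired iff a + b = 2i (mod P),
    # leaving vertex i idle; every edge of K_P appears in exactly one round.
    def near_one_factorization(p):
        rounds = []
        for i in range(p):
            factor = [tuple(sorted(((i + k) % p, (i - k) % p)))
                      for k in range(1, (p + 1) // 2)]
            rounds.append(factor)
        return rounds

    for f in near_one_factorization(5):  # the 10 edges of K_5 in 5 rounds
        print(sorted(f))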

Bio:

Narsingh Deo is the Millican Chair Professor of Computer Science and Director of the Center for Parallel Computation at University of Central Florida.  A Fellow of the IEEE and a Fellow of the ACM, Prof. Deo has authored four textbooks and over 200 refereed papers on graph theoretic algorithms, combinatorial computing, discrete optimization, and parallel computation.

________________________________________________

 

Lecture by T.S. Mohan (Principal Researcher, Infosys Technologies E&R ECom Research Lab) :

 

THE GRAND CHALLENGES IN SOFTWARE ENGINEERING - PERSPECTIVES FROM THE TRENCHES

Tuesday, June 7, 2011 from 10:30-11:30AM in EV003.309


Abstract:

While Software Engineering has traditionally not been widely popular amongst industrial software development practitioners, its maturity and necessity are being felt by these practitioners all the more. Is the field of Software Engineering stagnating? An analysis of the papers presented, by topic, at the recently concluded ICSE 2011 conference shows disturbing imbalances in the priorities of the Software Engineering research being pursued. In this talk, which was delivered at the ICSE 2011 conference, we highlight the role played by Grand Challenge initiatives and the need for them in contemporary software engineering research. We highlight grand challenge opportunities in six areas of advanced software engineering, particularly in the context of cloud computing, ubiquitous networked smart devices, social networks, rapid software development, compositionality of components and services, and secure testing and validation.

Bio:

T.S. Mohan works at Infosys Technologies E&R's ECom Research Lab as a Principal Researcher. His research interests include distributed systems, high performance computing, cloud and grid computing, as well as software architecture and software engineering. He has over 22 years of experience in academia and industry. T.S. Mohan holds a Master's and a PhD in computer science from the Indian Institute of Science, Bangalore, where he worked for about a decade before moving into industry. He was a young visiting scientist in the Lab for Computer Science, MIT, in 1988 and a visiting scientist at the NEC Research Institute, Princeton, in the summer of 1994. He pursued his entrepreneurial interests in advanced computing technologies in Bangalore for about 6 years before joining Infosys. He is the Co-Chair of the Software Engineering in Practice Track of the International Conference on Software Engineering (ICSE) 2011, as well as Co-Chair of the International Workshop on Software Engineering for Cloud Computing 2011 and the International Workshop on Future of Software Engineering in/for Cloud Computing 2011 (FoSEC 2011).

________________________________________________

Lecture by Anant Madabhushi (Department of Biomedical Engineering, Rutgers University) :

 

DIGITAL AND INTEGRATED DIAGNOSTICS: CHALLENGES AND OPPORTUNITIES

Wednesday, March 16, 2011 from 10:00-11:00am in EV2.260


Abstract:

With the advent of digital pathology, imaging scientists have begun to develop computerized image analysis algorithms for making diagnostic (disease presence), prognostic (outcome prediction), and theragnostic (choice of therapy) predictions from high-resolution images of digitized histopathology. One of the caveats to developing image analysis algorithms for digitized histopathology is the need to deal with highly dense, information-rich datasets -- datasets that would overwhelm most computer vision and image processing algorithms. Over the last decade, manifold learning and nonlinear dimensionality reduction schemes have emerged as popular and powerful machine learning tools for pattern recognition problems. However, these techniques have thus far been applied primarily to classification and analysis in computer vision (e.g., face detection). In this talk, we discuss recent work by our group on the application of manifold learning methods to problems in computer-aided diagnosis, prognosis, and theragnosis of digitized histopathology. In addition, we discuss some exciting recent developments in the application of these methods to multi-modal data fusion and classification; specifically, the building of meta-classifiers by fusing histological image and "omics" signatures for prostate and breast cancer outcome prediction.
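
A hedged sketch of the kind of nonlinear dimensionality reduction the abstract describes, using scikit-learn's Isomap on synthetic data in place of histopathology features (which are not available here):

    # Unroll a 3-D "swiss roll" manifold into 2-D with Isomap.
    from sklearn.datasets import make_swiss_roll
    from sklearn.manifold import Isomap

    X, _ = make_swiss_roll(n_samples=1000, random_state=0)   # toy 3-D manifold
    embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
    print(embedding.shape)            # (1000, 2): the data flattened to 2-D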

Bio:

Dr. Anant Madabhushi is the Director of the Laboratory for Computational Imaging and Bioinformatics (LCIB), Department of Biomedical Engineering, Rutgers University. Dr. Madabhushi received his Bachelors Degree in Biomedical Engineering from Mumbai University, India in 1998 and his Masters in Biomedical Engineering from the University of Texas, Austin in 2000. In 2004 he obtained his PhD in Bioengineering from the University of Pennsylvania. He joined the Department of Biomedical Engineering, Rutgers University as an Assistant Professor in 2005. He was promoted to Associate Professor with Tenure in 2010. He is also a member of the Cancer Institute of New Jersey and an Adjunct Assistant Professor of Radiology at the Robert Wood Johnson Medical Center, NJ. Dr. Madabhushi has authored over 110 peer-reviewed publications in leading international journals and conferences. He has one patent, 9 pending, and 5 provisional patents in the areas of medical image analysis, computer-aided diagnosis, and computer vision. He is an Associate Editor for IEEE Transactions on Biomedical Engineering, IEEE Transactions on Biomedical Engineering Letters, BMC Cancer, and Medical Physics. He is also on the Editorial Board of the Journal Analytical and Cellular Pathology. He has been the recipient of a number of awards for both research as well as teaching, including the Busch Biomedical Award (2006), the Technology Commercialization Award (2006), the Coulter Phase 1 and Phase 2 Early Career award (2006, 2008), the Excellence in Teaching Award (2007-2009), the Cancer Institute of New Jersey New Investigator Award (2007, 2009), the Society for Imaging Informatics in Medicine (SIIM) New Investigator award (2008), and the Life Sciences Commercialization Award (2008). He is also a Wallace H. Coulter Fellow and a Senior IEEE member. His research work has received grant funding from the National Cancer Institute (NIH), New Jersey Commission on Cancer Research, the Society for Imaging Informatics, the Department of Defense, and from Industry.

 

________________________________________________

 

Lecture by Gene Cooperman:

 

TEMPORAL DEBUGGING VIA FLEXIBLE CHECKPOINTING:  CHANGING THE COST MODEL

Monday, February 21, 2011 at 13:00 in EV002.260


Abstract:
Debugging semantic errors remains one of the most time-consuming, and
sometimes frustrating, efforts in developing and maintaining programs.
A semantic error is uncovered, and the programmer then begins multiple
iterations within a debugger in order to build up a hypothesis about
the original program fault that caused the error.  Examples of semantic
errors include segmentation faults, assertion failures, infinite loops,
deadlocks, livelocks, and missing synchronization locks.

This talk describes a debugging approach based on a reversible debugger,
sometimes known as a time-traveling debugger.  This is a more natural
approach, since it allows a programmer during a single program run to work
backwards from semantic error to earlier fault, and still earlier to the
original causal fault.  A new tool, reverse expression watchpoints, allows
one to begin with a program error and an expression that has an incorrect
value, and automatically bring the programmer backwards in time to a point
at which the expression first took on an incorrect value.  This tool is
part of a long-term project in which a series of such tools is planned,
each tool customized for a different class of semantic errors.
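
The search strategy behind reverse expression watchpoints can be sketched as a binary search over checkpoints: assuming the watched expression is correct at the start of the run and incorrect at the error, repeatedly restart from a midpoint checkpoint and re-test. The following Python sketch is schematic; the checkpoint handles and the predicate are stand-ins, not DMTCP's actual interface.

    def first_bad_checkpoint(checkpoints, expression_is_bad):
        """Binary search for the earliest checkpoint at which the watched
        expression has already taken on an incorrect value.

        checkpoints: checkpoint handles ordered by time (stand-ins here).
        expression_is_bad: restarts from a checkpoint, evaluates the
            expression, and returns True if its value is incorrect.
        Assumes checkpoints[0] is good and checkpoints[-1] is bad.
        """
        lo, hi = 0, len(checkpoints) - 1
        while lo + 1 < hi:
            mid = (lo + hi) // 2
            if expression_is_bad(checkpoints[mid]):
                hi = mid      # value already wrong here; look earlier
            else:
                lo = mid      # still correct here; fault lies later
        return checkpoints[hi]

From the interval found this way, single-stepping forward locates the point at which the expression first took on an incorrect value.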

The long-term goals described here are motivated by an analogy between
syntax errors and semantic errors:

* Currently, syntax errors are easily diagnosed by compilers that bring
the programmer directly to the line number, within a textual program,
that led to the bad syntax.

* In the future, semantic errors will be easily diagnosed by a new class
of reversible debugger tools that bring the programmer directly to the
point in time, within a familiar debugging environment, that led to the
later semantic error.

The reversible debugger is itself based on a fast, transparent checkpointing
package for Linux:  DMTCP (Distributed MultiThreaded CheckPointing).
DMTCP can checkpoint such varied programs as Matlab, OpenMPI, MySQL,
Python, Perl, GNU screen, Vim, Emacs, and most user-developed programs,
regardless of the implementation language.  No kernel modification or
other root privilege is needed.  Of particular interest for this talk
is the ability of a customized version of DMTCP to checkpoint an entire
gdb session.  The reversible debugger also supports weak determinism for
purposes of debugging multi-threaded programs.  The current implementation
has been demonstrated to be robust enough to run such large, real-world
programs as MySQL and Firefox.

Bio:
Gene Cooperman received his Ph.D. from Brown University in 1978.  He spent
two years as a post-doc, followed by six years at GTE Laboratories.
He has been a professor at Northeastern University since 1986, and
a full professor since 1992.  His interests lie in high performance
computation and symbolic algebra.  He has developed Task-Oriented
Parallel C (TOP-C/C++), a model for writing parallel software easily.
More recently, he has worked with novel applications of transparent
checkpointing, such as checkpointing symbolic debuggers and checkpointing
individual graphics-based processes within a graphics desktop.  His DMTCP
checkpointing project provides a robust platform for this purpose, while
not requiring modifications to the application or kernel/run-time library.
His disk-based parallel computation project (joint with Daniel Kunkle)
is based on the Roomy language extension, and translates traditional
RAM-intensive computations into scalable computations based on parallel
disks.  Finally, he works on the semi-automatic source-level translation
of single-threaded task-oriented programs into multi-threaded programs
with a small footprint.  This work is an important focus of his ongoing
collaboration with CERN, and the work is motivated by the requirements
of future many-core CPU chips.  He leads the High Performance Computing
Laboratory at Northeastern University, where he currently advises four
PhD students.  He has over 80 refereed publications.

________________________________________________

Lecture by David Wessel (CNMAT, University of California Berkeley):

 

PARALLELIZATION OF MUSIC AND AUDIO APPLICATIONS

November 26, 2010 at 1:00PM in EV2.260


Abstract:
Multi-core processors are now common, but musical and audio applications
that take advantage of multiple cores are rare. The most popular music
software programming environments are sequential in character and
provide only a modicum of support for the efficiencies to be gained from
parallelization. We provide a brief summary of existing facilities in
the most popular languages and provide examples of parallel
implementations of some key algorithms in computer music, such as
partitioned convolution and non-negative matrix factorization (NMF). We
follow with a brief description of the SEJITS approach to providing
support between the productivity-layer languages used by musicians and
related domain experts and efficient parallel implementations. We also
consider the importance of I/O in computer architectures for music and
audio applications. We lament the fact that current GPU architectures, as
delivered in desktop and laptop processors, are not properly harnessed for
low-latency real-time audio applications.
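
To make "partitioned convolution" concrete, here is a minimal uniformly partitioned overlap-add sketch in Python/NumPy. The block size and structure are illustrative assumptions, not the implementations discussed in the talk, which must also meet real-time constraints.

    import numpy as np

    def partitioned_convolution(signal, ir, block=256):
        """Uniformly partitioned overlap-add FFT convolution (a sketch)."""
        n_parts = (len(ir) + block - 1) // block
        fft_len = 2 * block  # long enough for linear convolution of blocks
        # Transform each impulse-response partition once, up front.
        parts = [np.fft.rfft(ir[i * block:(i + 1) * block], fft_len)
                 for i in range(n_parts)]
        out = np.zeros(len(signal) + (n_parts + 1) * block)
        # Stream the input block by block, as a real-time engine would.
        for pos in range(0, len(signal), block):
            spec = np.fft.rfft(signal[pos:pos + block], fft_len)
            for k, part in enumerate(parts):
                seg = np.fft.irfft(spec * part, fft_len)
                out[pos + k * block:pos + k * block + fft_len] += seg
        return out[:len(signal) + len(ir) - 1]

    # Sanity check against direct convolution.
    x, h = np.random.randn(1000), np.random.randn(300)
    assert np.allclose(partitioned_convolution(x, h), np.convolve(x, h))

Partitioning lets a long impulse response be applied with the latency of one small block rather than of the whole response, which is why the technique matters for real-time audio.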

Bio:
From his high school years onwards, David Wessel's musical
activities were central to his life, and after his PhD in Psychology he
committed himself to blending his science and technology skills with his
musical interests. In 1976, at the invitation of Pierre Boulez, he moved
to Paris to work as a researcher at the then-nascent Institut de
Recherche et Coordination Acoustique/Musique (IRCAM), where he remained
until 1988. For his work at IRCAM he was recognized as Chevalier dans
l'Ordre des Arts et des Lettres by the French Minister of Culture.

In 1988, he arrived at UC Berkeley as Professor of Music with the charge
of building the interdisciplinary Center for New Music and Audio
Technologies (CNMAT). He organized CNMAT as a laboratory wherein both
science and technology people interact on a daily basis with musicians.
Wessel insists on an instrumental conception: the computer as a musical
instrument equipped with gesture-sensing devices and sound diffusion
systems.


________________________________________________


Lecture by Dr. Peter Grogono on:

The Unbearable Dizziness of Rotating: A Mathematical History Tour

Date and Location: October 13th, 1:00 PM, Room EV3.309

We start the tour with observations about the implementation of rotation in current three-dimensional graphics programming.  Standard texts convey the impression that the mathematics is conventional and that the main problem is to find a compromise between performance, precision, and numerical stability.  The actual situation is more interesting.
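
For context, the workhorse of rotation in current graphics code is Hamilton's quaternion. The short Python sketch below rotates a vector with the standard unit-quaternion sandwich product; it is a textbook identity, not material from the talk.

    import numpy as np

    def axis_angle_to_quat(axis, theta):
        """Unit quaternion (w, x, y, z) for a rotation of theta about axis."""
        axis = np.asarray(axis, float) / np.linalg.norm(axis)
        return np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])

    def quat_rotate(q, v):
        """Rotate 3-vector v by unit quaternion q (computes q v q*)."""
        w, u = q[0], q[1:]
        v = np.asarray(v, float)
        # Expansion of the sandwich product, valid for unit quaternions.
        return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

    # Rotating the x-axis by 90 degrees about z yields the y-axis.
    q = axis_angle_to_quat([0, 0, 1], np.pi / 2)
    print(np.round(quat_rotate(q, [1, 0, 0]), 6))  # [0. 1. 0.]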

The foundations of modern graphics programming were laid in the mid-nineteenth century by mathematicians such as Hamilton, Cayley, and Gibbs.  Presentations create the illusion of coherence and completeness, but closer inspection of the original work reveals gaps, oddities, and a curious link between fermions, bosons, and Balinese candle dancers.

Other nineteenth-century mathematicians, such as Grassmann, Rodrigues, Clifford, and Lie, produced consistent and elegant systems that, for various reasons, have not received the attention they deserve in graphics and other fields.  However, bits and pieces of these systems have been exploited by physicists for many years.  Recently, there have been efforts to rebuild mechanics and physics on a single algebraic foundation.  Algebraic techniques have also been introduced into graphics programming and may eventually come to dominate the field.  We end the tour with a glimpse of a possible future for rotation in graphics programming.

_________________________________________________

 

Dr. Yue Lu from the Department of Computer Science and Technology, East China Normal University, is giving a lecture titled:

Applications of Document Image Recognition to Postal Automation

Date and time: August 10th, 10:00 a.m., EV11.119

 

________________________________________________

 

Speaker: Dieter Rautenbach, Technische Universitaet Ilmenau

Title: Betweennesses induced by Forests and Set Systems - Structures
and Algorithms

Date and time: Friday 21 May, 15:00 in room EV2.260



Abstract:

We study so-called betweennesses induced by graphs
as well as set systems. Algorithmic problems related
to betweennesses are typically hard. They have been
studied as relaxations of ordinal embeddings and occur
for instance in psychometrics and molecular biology.
Our contributions are hardness results, efficient
algorithms, and structural insights such as complete
axiomatic characterizations.

This is joint work with V. Santos, P. M. Schaefer, and J. L. Szwarcfiter.

 

___________________________________________________

 

IEEE Women In Engineering (WIE) Mentoring Program

Place:

EV3.309, CSE Department, Concordia University
1515 Ste Catherine West, 3rd floor
Montreal, Quebec H3G 1M8

Abstract:

The original Mentor is a character in Homer's epic poem The Odyssey. When Odysseus,
King of Ithaca, went to fight in the Trojan War, he entrusted the care of his kingdom to
Mentor. Mentor served as the teacher and overseer of Odysseus's son, Telemachus.
In today's corporate nomenclature, mentorship refers to the relationship in which a more
experienced or more knowledgeable person helps a less experienced or less knowledgeable
person, often referred to as a protégé or mentee. However, there are many avenues to mentor or
be mentored. This talk will discuss the speaker's experiences with mentoring.


Speaker: Jennifer Ng, IEEE WIE Ottawa.

Biography:

Jennifer obtained her Bachelor of Electrical Engineering from McGill University,
Montréal, Canada (B.Eng. '94) and recently moved back to Canada after a decade in the US.
She works in Regulatory Affairs for Medical Devices at Abbott Point of Care in Ottawa.
Jennifer has been a member of IEEE since 1990 and became a member of Women In
Engineering (WIE) in 1996. She has been involved in mentoring students (McGill Mentor
Program) as well as peer IEEE members (IEEE mentoring service) over the past several
years. For her full biography, go to http://www.jenniferng.org

For more information, please, visit the IEEE WIE Montreal website at

http://users.encs.concordia.ca/~ormandj/WIE-Montreal.html

___________________________________________________

 

Invited Speaker Dr. Herbert Freeman

" Labeling the Features of a Map Solving a Problem that was Thought to be beyond Computer Solution "

DATE: Thursday, October 15th, 2009, TIME: 5:45 p.m., LOCATION: EV 3.309

------------------------------------------------------------------------------------------------------------

Date: July 03, 2009 at 10:30 AM

Location: EV003.309, 1515 St Catherine Street West

Speaker: Prof. K. K. Biswas

Title: Recognizing individuals from the energy component of their walks

Abstract:

Image-based human recognition methods such as fingerprints, palms,
faces, ears, and irises require the subject to cooperate to provide the
relevant data. Recently, gait has emerged as a new biometric which is
non-obtrusive in nature and concerns recognition of individuals by the
way they walk. The spatial and temporal shape of motion of an individual
is usually the same for all gait cycles and is considered to be unique
to that individual. This talk will present schemes which make use of
the gait energy image representation. This basically involves capturing
the human motion in a single image while preserving the temporal gait
characteristics of the individual. The image does get disturbed when the
subject is carrying a bag or wearing an overcoat. We shall illustrate
how these effects can be minimized by using spatio-temporal motion
features, with results on a large gait data set.
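
The gait energy image itself has a strikingly simple definition: the average of aligned binary silhouettes over one gait cycle. A minimal sketch, assuming the silhouettes are already segmented, centered, and size-normalized:

    import numpy as np

    def gait_energy_image(silhouettes):
        """Compute a gait energy image (GEI) for one gait cycle.

        silhouettes: array of shape (T, H, W) holding T binary silhouette
        frames, assumed already aligned and size-normalized.
        Static body parts (head, torso) come out bright, while limbs in
        motion take intermediate values, so a single image preserves the
        temporal gait characteristics of the individual.
        """
        return np.asarray(silhouettes, dtype=float).mean(axis=0)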



Bio:

Dr. K. K. Biswas has been a Professor in the Computer Science and
Engineering Department of IIT Delhi, India, since 1988. He has extensive
research experience in the areas of image processing and computer vision.
His primary areas of research include fuzzy logic for content-based image
retrieval, video segmentation and categorization, gait recognition
technology for biometrics, and soft-computing-based activity recognition
in video clips. He was visiting faculty at the University of Central
Florida from 2003 to 2007 and is a member of the editorial boards of
international journals in his research field.

 

______________________________________________________________________________________

 

Date: June 16, 2009 at 18:00

Location: EV003.309, 1515 St Catherine Street West

Speaker: Emil Vassev, University College Dublin

Title: Engineering Autonomic Systems with ASSL

Abstract:

Since its introduction in 2001 by IBM, autonomic computing has inspired many initiatives for self-management of complex systems. The Autonomic System Specification Language (ASSL) is an initiative that provides a framework for the specification, validation, and code generation of autonomic systems. A formal method dedicated to autonomic computing, ASSL helps researchers with problem formation and system design, analysis, evaluation, and implementation. The ASSL formal notation is a hierarchical specification model defined through formalization tiers. The framework provides a toolset that developers can use to edit and validate ASSL specifications and generate Java code. The current validation approach is a form of consistency checking performed against a set of semantic definitions. Currently, different verification mechanisms for automatic reasoning are under development, such as model checking support for both specification and post-implementation phases of the software lifecycle. ASSL has been successfully used to make existing and prospective complex systems autonomic. Here, autonomic properties have been specified and prototype models have been generated for two NASA projects: the Autonomous Nano-Technology Swarm concept mission and the Voyager mission.
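
The self-management that ASSL specifications describe ultimately boils down to autonomic control loops. The Python sketch below shows a generic monitor-analyze-plan-execute loop purely as an illustration of the autonomic-computing pattern; it is not ASSL syntax or ASSL-generated code.

    class AutonomicElement:
        """Generic monitor-analyze-plan-execute (MAPE) loop, the control
        pattern that autonomic frameworks specify at a higher level."""

        def __init__(self, sensor, actuator, policy):
            self.sensor = sensor      # callable returning current metrics
            self.actuator = actuator  # callable applying an action
            self.policy = policy      # maps a detected symptom to an action

        def run_once(self, threshold=0.9):
            metrics = self.sensor()                              # monitor
            symptoms = [k for k, v in metrics.items()
                        if v > threshold]                        # analyze
            plan = [self.policy[s] for s in symptoms
                    if s in self.policy]                         # plan
            for action in plan:                                  # execute
                self.actuator(action)

    # Example: shed load when CPU utilization exceeds the threshold.
    element = AutonomicElement(
        sensor=lambda: {"cpu": 0.95},
        actuator=lambda action: print("applying:", action),
        policy={"cpu": "shed_load"},
    )
    element.run_once()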

 

_________________________________________________________________________________________________

 

Date: 4th, June 2009 at 10:00 AM

Location: EV003.309, 1515 St Catherine Street West

Speaker: Dr. Chi Hau Chen, University of Massachusetts Dartmouth

Title: Signal Processing in Pattern Recognition

Abstract:

While progress in pattern recognition and signal processing has proceeded nearly in parallel over the past 50 years, the convergence of the two fields has been quite evident, especially in the use of signal (image) processing and modeling in preprocessing and feature extraction for pattern recognition. A good example is the transform methods of signal (image) processing, which are used extensively in pattern recognition. In this talk we will examine signal processing in pattern recognition applications with seismic, sonar, and ultrasonic testing signals as well as remote sensing images. Special focus is placed on statistical pattern recognition issues in remote sensing. While pattern recognition applications are diverse, signal processing has provided a common step toward building more effective pattern recognition systems.

Bio:

Chi Hau Chen received his Ph.D. in electrical engineering from Purdue University in 1965. He has been a faculty member with the University of Massachusetts Dartmouth (UMass Dartmouth) since 1968, where he is now Chancellor Professor. He was the director of the NATO Advanced Study Institute on Pattern Recognition and Signal Processing, held at ENST, Paris, in 1978. Dr. Chen was Associate Editor of IEEE Trans. on Acoustics, Speech and Signal Processing from 1982 to 1986 and Associate Editor on information processing for remote sensing of IEEE Trans. on Geoscience and Remote Sensing from 1985 to 2000. He became an IEEE Fellow in 1988 and a Life Fellow in 2003, and is also a Fellow of the International Association for Pattern Recognition (IAPR, 1996). He has been an Associate Editor of the International Journal of Pattern Recognition and Artificial Intelligence since 1985, and on the Editorial Board of the Pattern Recognition Journal since 2009. In addition to remote sensing and geophysical applications of statistical pattern recognition, he has been active with the signal and image processing of medical ultrasound images as well as industrial ultrasonic data for nondestructive evaluation of materials. He has published 25 books in his areas of research interest.
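
As a toy illustration of the transform-method idea in the abstract above, the sketch below uses low-frequency FFT magnitudes as features for a nearest-neighbour classifier. The synthetic signals and all parameters are assumptions for demonstration only.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)

    def make_signals(freq, n):
        """A synthetic class of noisy sinusoids at a given frequency."""
        t = np.linspace(0, 1, 256, endpoint=False)
        return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal((n, 256))

    def spectral_features(signals):
        # Transform step: low-frequency FFT magnitudes serve as compact
        # features for the recognition step.
        return np.abs(np.fft.rfft(signals, axis=1))[:, :16]

    X = np.vstack([make_signals(5, 50), make_signals(9, 50)])
    y = np.repeat([0, 1], 50)
    clf = KNeighborsClassifier(n_neighbors=1).fit(spectral_features(X), y)
    test = np.vstack([make_signals(5, 5), make_signals(9, 5)])
    print(clf.predict(spectral_features(test)))  # expect [0 0 0 0 0 1 1 1 1 1]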

 

_________________________________________________________________________________________________

Date: April 15, 2009 at 12:00

Location: EV002.184, 1515 St Catherine West, Montreal

Speaker: U.S.R. Murty

Title: The Perfect Matching Polytope and Solid Bricks

Abstract:

The perfect matching polytope of a graph G, denoted here by Poly(G), is the convex hull of the set of incidence vectors of perfect matchings of G. Edmonds (1965) showed that a vector x in R^E belongs to the perfect matching polytope of G if and only if it satisfies the inequalities: (i) x \geq 0 (non-negativity), (ii) x(\partial(v)) = 1 for all v in V (degree constraints), and (iii) x(\partial(S)) \geq 1 for all odd subsets S of V (odd set constraints). We are interested in the problem of characterizing graphs whose perfect matching polytopes are determined by non-negativity and the degree constraints. It is well known that bipartite graphs have this property. A graph is an Edmonds graph if the description of Poly(G) requires at least one odd set constraint. The Edmonds Graph Recognition Problem (EGP) is the problem of recognizing if a given graph is an Edmonds graph. By Edmonds' Theorem, EGP is in NP. We showed that for planar graphs EGP is in P. But, in general, we do not even know if EGP is in co-NP. In this talk I shall present a characterization of Edmonds graphs. A class of graphs known as solid bricks arises as an important source of examples of non-Edmonds graphs.
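
Written out, Edmonds' linear description of the perfect matching polytope is

\[
\mathrm{Poly}(G) = \Bigl\{\, x \in \mathbb{R}^{E} :\;
  x \ge 0,\ \
  x(\partial(v)) = 1 \ \text{for all } v \in V,\ \
  x(\partial(S)) \ge 1 \ \text{for all odd } S \subseteq V \,\Bigr\},
\]

where \partial(S) denotes the set of edges with exactly one end in S; a graph is an Edmonds graph precisely when some odd set constraint cannot be dropped from this description.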

Based on joint work with M. H. de Carvalho and C. L. Lucchesi.


 

Date: April 23, 2009 at 14:30

Location: EV002.184, 1515 St Catherine West, Montreal

Speaker: T.C. Nicholas Graham

Title: Supporting Adaptive Mobile Collaboration

Abstract:

Recent years have seen a proliferation of exciting new mobile devices, such as smartphones, netbooks, and ultra-light laptops. These provide ever more ways for people to communicate and collaborate on the go. Programming collaborative applications over mobile devices is challenging, as such applications must be high-performance, robust in the presence of failure (such as batteries dying or losing network connection), and easy to use in a mobile environment.

In this talk, I will present Fiia, a middleware toolkit aiding the development of collaborative applications in a mobile setting. Fiia's approach is model-based, allowing developers to manipulate a high-level conceptual model of their system, while a runtime refinery automatically resolves issues of distribution and partial failure.

Fiia has been used to develop systems as diverse as a collaborative game prototyping environment, a smartphone-based presentation tool, and a tabletop-based furniture sales system.


 

Colloquium Series in Computer Science and Software Engineering

Speaker: Dr. Vasek Chvatal

Date: Monday, March 2, 2009

Location: EV 3.309, 1515 St Catherine St, Montreal

Abstract:

A point in the plane is said to lie between points A and C if it is an interior point of the line segment joining A and C. In his development of geometry, Euclid neglected to give the notion of betweenness the same axiomatic treatment that he gave, for instance, to the notion of equality. This omission was rectified twenty-two centuries later by Moritz Pasch: http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Pasch.html

During the twentieth century, geometric betweenness was generalized in diverse branches of mathematics to ternary relations of metric betweenness, lattice betweenness, and algebraic betweenness. I will talk about three settings where such abstract betweennesses show up.
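
For instance, metric betweenness, the variant behind the metric-space results below, is standardly defined in a metric space (M, d) by

\[
[a\,b\,c] \iff d(a,b) + d(b,c) = d(a,c),
\]

that is, b lies between a and c exactly when the triangle inequality holds with equality.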

The first of these settings is ordered geometry; there, primitive notions of points and lines are linked by the relation of incidence and by axioms of betweenness; two classic theorems here are the Sylvester-Gallai theorem http://mathworld.wolfram.com/SylvestersLineProblem.html
and the de Bruijn-Erdos theorem. I conjectured in 1998 http://users.encs.concordia.ca/~chvatal/newsg.pdf and Xiaomin Chen proved in 2003 http://dimacs.rutgers.edu/TechnicalReports/abstracts/2003/2003-32.html that the Sylvester-Gallai theorem generalizes to metric spaces when lines in these spaces are defined right; together, we conjectured http://arxiv.org/abs/math.CO/0610036 that the de Bruijn-Erdos theorem also generalizes to metric spaces when lines in these spaces are defined right (with "right" having a different sense in each of the two instances); the two of us and Ehsan Chiniforooshan have partial results on this conjecture.

The second of the three settings is abstract convexity; there, families of sets called "convex" obey certain axioms. Such finite structures are called convex geometries when they have the Minkowski-Krein-Milman property: every set is the convex hull of its extreme points. Two classical examples of convex geometries come from shelling of partially ordered sets and simplicial shelling of triangulated graphs. Last June I characterized, by a five-point condition, a class of betweennesses generating a class of convex geometries that subsumes the two examples. http://users.encs.concordia.ca/~chvatal/abc.pdf Laurent Beaudou, Ehsan Chiniforooshan, and I have additional results on such betweennesses.

The last setting lies between physics and philosophy: in his effort to develop a causal theory of time, Hans Reichenbach http://en.wikipedia.org/wiki/Hans_Reichenbach introduced the notion of causal betweenness, which is a ternary relation defined on events in probability spaces. This January, Baoyindureng Wu and I characterized, by easily verifiable properties, abstract ternary relations isomorphic to Reichenbach's causal betweenness.
http://arxiv.org/abs/0902.1763

A nice connection with a 1979 theorem of Jarda Opatrny
http://dx.doi.org/10.1137/0208008
appears here.

The joint work with Laurent Beaudou, Ehsan Chiniforooshan, and Baoyindureng Wu was done in our research group ConCoCO http://users.encs.concordia.ca/~concoco/
(Concordia Computational Combinatorial Optimization).

Biography:

Vasek Chvatal received his PhD in mathematics from the University of Waterloo in 1970. Before joining Concordia in June 2004 as its first Tier 1 Canada Research Chair, he taught mathematics, operations research, and computer science at McGill, Stanford, Universite de Montreal, and Rutgers. Information about his research is available at http://users.encs.concordia.ca/~chvatal/


 

Past Events

  • April 14, 2009 - Defence given by Alina Andreevskaia "Sentence-Level Sentiment Tagging Across Different Domains"
  • April 9, 2009 - Defence given by Amani Jamal "A UML Framework for OLAP Conceptual Modeling"
  • April 9, 2009 - Defence given by Ai Hua WU "OO-IP Hybrid Language Design and a Framework Approach to the GIPC"
  • April 8, 2009 - Defence given by Lehan Meng "Multichannel Optical Access Networks: Design and Resource Management"
  • April 8, 2009 - Defence given by Mitra Nami "ELIDE: An Interactive Development Environment for the Erasmus Language"
  • April 6, 2009 - Defence given by Ruhan Sayeed "High Performance Analytics with the R3-cache"
  • April 6, 2009 - Defence given by Shafique Ahmed "Mining Software Repositories to Support Software Evolution"
  • April 3, 2009 - Defence given by Muhammad Ismail Shah "A Novel Image Matching Approach for Word Spotting"
  • April 2, 2009 - Defence given by Dania El-Khechen "Decomposing and Packing Polygons"
  • April 2, 2009 - Defence given by Fuzhi Chen "Visual Representation of a Customizable Software Maintenance Process Model"
  • April 1, 2009 - Defence given by Philon Nguyen "Fast and Scalable Similarity and Correlation Queries on Time Series Data"
  • March 31, 2009  - Defence given by Tahira Hasan "Finding Usage Patterns from Generalized Weblog Data"
  • March 30, 2009 - Defence given by Wumo Pan "Pattern Detection and Recognition using Over-Complete and Sparse Representations"
  • March 10, 2009 - Defence given by Gopinatha Jakadeesan "FT-PAS – A Framework for Pattern Specific Fault-Tolerance in Parallel Programming"
  • March 12, 2009 - Defence given by Lorenzo Luciano "An Automated Multimodal Face Recognition System Based on Fusion of Face and Ear"
  • March 26, 2009 - Defence given by François Coallier "What Does Engineering Mean in 'Software Engineering'?"
  • February 4, 2009 - Defence given by Vahid Safar Nourollah "Automated Building Monitoring using a Wireless Sensor Network"
  • February 9, 2009 - Defence given by Ahmed Alasoud on "A Multi-Matching Technique for Combining Similarity Measures in Ontology Integration"
  • February 2, 2009 - Seminar on "Secure Multicast Communications" presented by Dr. William Atwood.
  • January 22, 2009 - Defence given by Chen Na Lian on "Fast Computation of Supermaximal Repeats in DNA Sequences".
  • December 22nd, 2008 - Seminar on "Biometrics and Gait Recognition", presented by Dr. Mounim A. El-Yacoubi, University in Casablanca, Morocco.
  • December 22, 2008 - Defence given by Naseem Ibrahim on "Transforming Architectural Descriptions of Component-based
  • December 16, 2008 - Defence given by Jin Zan Lai on "Query Processing and Optimization in Deductive Databases with Certainty Constraints"
  • December 09, 2008 - Seminar on "Linking Movie Data on the Web with LinkedMDB", presented by Dr. Mariano Consens, Information Engineering, MIE and CS, University of Toronto.
  • December 08, 2008 - Defence given by Hong Fei Zhu "Regression Test Selection for Distributed Java RMI Programs by Means of Formal Concept Analysis"
  • December 05, 2008 - Defence given by Daniel Sinnig on "Use Case and Task Models: Formal Unification and Integrated Development Methodology"
  • December 04, 2008 - Defence given by Yan Cheng on "A Multi-Panel QoS Control Communications Framework in Heterogeneous Networks".
  • December 01, 2008 - Defence given by Aleksey Izmailov on "A Fully Automated Real-time Eigenface-based Face Recognition System".
  • November 24, 2008 - Defence given by Louis Charbonneau on "Evolution of an Artificial Market and its use to Predict Future Stock Prices"
  • November 26, 2008 - Defence given by Nasim Farsiniamarj on "Combining Integer Programming and Tableau-based Reasoning: A Hybrid Calculus for the Description Logic SHQ"
  • July 23, 2008 - Seminar given by Prof. Dr. Forbrig (University of Rostock, Germany), "Task and Dialog Specifications for UI Development".
  • April 28, 2008 - Deadline for receiving poster abstracts for Poster session at AI 2008.
  • February 12, 2008 - Dr. John Plaice, from the School of Computer Science and Engineering, The University of New South Wales, Australia, will give a public lecture on TransLucid, the Cartesian Programming Language, at 10:00 AM in room EV003.309.

 

Concordia University