FROM FEATURES TO PREDICATES OR FROM SYMPTOMS TO SYNDROMES IN SUPERVISED PATTERN RECOGNITION
Wednesday, May 15th, 11:00am, EV3.309
The present talk summarizes recent advances in the discovery of empirical regularities by solving the supervised pattern recognition problem when binary features are used in pattern descriptions. A typical example with binary features is medical diagnosis based on the presence or absence of a number of symptoms. The mathematical models used are based on learning Boolean formulas. These formulas are expressed as conjunctions and are called non-reducible descriptors; they correspond to syndromes in medical diagnosis. A combinatorial procedure for the construction of non-reducible syndromes is given, and non-reducible syndromes are then extended to generalized non-reducible syndromes. Decision rules and the feature selection problem are discussed. The approach is illustrated with applications to the recognition of Arabic numerals in different graphical representations and of QRS complexes in electrocardiograms.
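To make the idea concrete, here is a hedged toy sketch (an invented greedy reduction, not the talk's actual combinatorial procedure): starting from the full symptom description of one positive example, drop symptoms one at a time as long as the remaining conjunction still rejects every counter-example. What survives is a conjunction that cannot be reduced further, i.e. a "non-reducible syndrome" in the talk's terminology.

```python
def matches(conj, x):
    # conj maps feature index -> required binary value (symptom present/absent)
    return all(x[i] == v for i, v in conj.items())

def non_reducible(seed, negatives):
    """Greedily drop symptoms from one positive example's full description
    while the conjunction still rejects every negative example."""
    conj = {i: v for i, v in enumerate(seed)}
    for i in list(conj):
        trial = {j: v for j, v in conj.items() if j != i}
        if trial and not any(matches(trial, n) for n in negatives):
            conj = trial
    return conj

# toy "diagnosis": three symptoms, one sick patient, two healthy controls
sick = [1, 1, 0]
healthy = [[0, 1, 0], [1, 0, 1]]
syndrome = non_reducible(sick, healthy)  # {0: 1, 2: 0}
```

The resulting two-symptom conjunction still separates the sick patient from both healthy ones, and no single symptom can be removed without losing that property.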
Prof. Ventzeslav Valev, PhD, Dr. of Math. Sci., obtained an M.Sc. degree in Computer Science from the Wroclaw University of Technology, Wroclaw, Poland, and an M.Sc. degree in Mathematics from the University of Wroclaw, Wroclaw, Poland. He obtained his Ph.D. in Computer Science from the Dorodnicyn Computing Centre of the Russian Academy of Sciences in Moscow in 1979 and his Doctor of Mathematical Sciences degree in Mathematical Informatics from the Institute of Mathematics and Informatics of the Bulgarian Academy of Sciences, Sofia, Bulgaria, in 1995, where he was elected Full Professor in 2002. In 2010 Dr. Valev was elected an Associate Member of the Institute of Mathematics and Informatics, Bulgarian Academy of Sciences.
Dr. Valev has held appointments at the University of Iowa, the University of Medicine and Dentistry of New Jersey, and Saint Louis University, as well as appointments in Germany, Turkey, Cyprus, Poland, Bulgaria, Saudi Arabia, and Oman. He is the author of more than 50 papers published in Pattern Recognition, Pattern Recognition Letters, International Journal on Machine Graphics & Vision, Critical Reviews in Biomedical Engineering, Lecture Notes in Computer Science (LNCS), and the proceedings of many international conferences. Since 1998 Dr. Valev has been a Fellow of the International Association for Pattern Recognition (IAPR).
MYSTERIES OF SEARCH TREES
May 14, 2013, 11:00am, H 767
The search tree is one of the most basic and most important data structures in computer science. It lies behind all modern database systems and has many other applications. Although the history of this data structure extends back more than fifty years, we still do not know everything about it. This talk will explore new ideas that lead both to simpler kinds of search trees and to a better analysis of their efficiency.
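For orientation, the basic structure under discussion can be sketched in a few lines; this is only the textbook unbalanced binary search tree, not any of the newer variants the talk will cover.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Standard BST insertion: descend by comparison, attach at a leaf."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    """Walk down from the root, going left or right by comparison."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in (5, 3, 8, 1):
    root = insert(root, k)
found = search(root, 3)  # True
```

The interesting questions, which balancing schemes and which cost analyses make such trees efficient in the worst case, are exactly what the talk addresses.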
Robert E. Tarjan is the James S. McDonnell Distinguished University Professor of Computer Science at Princeton University and a Visiting Researcher at Microsoft Research. He is an expert in the design and analysis of data structures and graph algorithms. A member of the U.S. National Academy of Sciences and of the U.S. National Academy of Engineering, he was awarded the Nevanlinna Prize in 1982, and, with John Hopcroft, the Turing Award in 1986.
NONDETERMINISM IN THE ABSTRACT TILE ASSEMBLY MODEL
Monday, May 13, 2013, 10:00am, EV 3.309
Researchers have shown that self-assembly of tile-like DNA structures can be used for nanoscale computations. The abstract Tile Assembly Model (aTAM), proposed by Winfree in 1998, is a simple mathematical abstraction of DNA tile self-assembly. The aTAM has been studied extensively over the past 15 years. In this talk, I will give a brief overview of complexity results for the aTAM, show how allowing nondeterminism can increase the power of the aTAM even when self-assembling a shape deterministically, and finally discuss a number of open questions.
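A minimal illustration of the model's attachment rule may help (this is an illustrative toy, not Winfree's full formalism): tiles carry glues, each a label with a strength, on their four sides, and a tile may attach at a position only if the total strength of glues matched against already-placed neighbours meets the temperature threshold.

```python
TEMP = 2
# tile type: dict side -> (glue label, strength); None = no glue
seed  = {"E": ("a", 2), "N": None, "S": None, "W": None}
brick = {"W": ("a", 2), "E": ("a", 2), "N": None, "S": None}

OPP = {"N": "S", "S": "N", "E": "W", "W": "E"}
STEP = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def can_attach(assembly, pos, tile):
    """Sum strengths of glues matching neighbours; attach iff >= TEMP."""
    strength = 0
    for side, (dx, dy) in STEP.items():
        nb = assembly.get((pos[0] + dx, pos[1] + dy))
        if nb and tile[side] and nb[OPP[side]] == tile[side]:
            strength += tile[side][1]
    return strength >= TEMP

assembly = {(0, 0): seed}
# grow a 1 x 4 row deterministically, eastward from the seed
for _ in range(3):
    frontier = [(x + 1, y) for (x, y) in list(assembly) if (x + 1, y) not in assembly]
    for pos in frontier:
        if can_attach(assembly, pos, brick):
            assembly[pos] = brick
```

The nondeterminism the talk discusses arises when more than one tile type could attach at the same position, which, perhaps surprisingly, can help even when the final shape is uniquely determined.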
Ehsan Chiniforooshan received his M.Sc. from Sharif University of Technology, advised by Rouzbeh Tusserkani, and his Ph.D. from the University of Waterloo under the supervision of Naomi Nishimura. He has worked on problems in Combinatorics, Graph Theory, Data Structures, and Self-Assembly, and is currently a Software Engineer at Google.
FUNDAMENTAL PHYSICAL CAPABILITIES AND LIMITATIONS IN COMMUNICATION AND COMPUTING
Tuesday, March 12, 2013, 11:00 a.m., EV 2.260
This talk is a review of fifty years of research focused on revealing the ultimate capabilities of physical systems, on one hand, and their fundamental limitations, on the other, in communication and computing. The following topics are considered.
1. Limits on information transmission by physical agents. Capacity and energy efficiency of photon and corpuscular channels. General bound on minimum energy per information unit.
2. The effect of irreversibility of quantum measurements. Entropy defect and “accessible” information.
3. POVM vs. von Neumann measurements in finite- and infinite-dimensional Hilbert spaces.
4. The maximum speed of computing operations. The Mandelstam-Tamm and Margolus-Levitin bounds. The minimum operation time of quantum gates. The unified tight bound on the rate of computation.
5. Thermodynamic cost of reversible computing. The minimum energy dissipation per computational step.
6. Equivalence relation between information and work. Heat-to-work conversion by use of one-particle and two-particle information.
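For reference, the two quantum speed limits named in item 4 are usually stated as follows (these are the standard textbook forms, not formulas taken from the talk):

```latex
% Mandelstam-Tamm bound: minimum time to evolve to an orthogonal state,
% where \Delta E is the energy spread of the state
\tau_{\mathrm{MT}} \ge \frac{\pi\hbar}{2\,\Delta E}
% Margolus-Levitin bound, where E is the mean energy above the ground state
\tau_{\mathrm{ML}} \ge \frac{\pi\hbar}{2\,E}
% The unified tight bound combines both constraints:
\tau \ge \max\!\left(\frac{\pi\hbar}{2\,\Delta E},\ \frac{\pi\hbar}{2\,E}\right)
```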
Dr. Lev B. Levitin received the M.S. degree in physics from Moscow University, Moscow, USSR, in 1960 and the Ph.D. degree in physical and mathematical sciences from the USSR Academy of Sciences in 1969. Since 1982, he has been with the College of Engineering, Boston University, and since 1986 has been Distinguished Professor of Engineering Science with the Department of Electrical and Computer Engineering at Boston University. He has published over 190 papers, presentations, and patents. His research areas include information theory; quantum communication systems; physics of computation; quantum computing; quantum theory of measurements; mathematical linguistics; theory of complex systems; coding theory; theory of computer hardware testing; reliable computer networks; and bioinformatics. He is a Life Fellow of IEEE and a member of the International Academy of Informatics and other professional societies.
TOWARDS PERSONALIZED MEDICINE
Friday, March 8, 2013, 10:00 a.m., EV 2.184
The mapping of the human genome, completed in 2003 after 13 years of collective effort at an estimated cost of $3 billion, had an immense impact on biomedical research. Earlier this year, Life Technologies presented a small device to sequence an entire human genome in a day for less than $1,000, effectively making personal genomics accessible to most laboratories.
Current databases are typically designed as single organism databases and are not readily amenable to complex system-wide research across multiple species. In this talk, I will present the unique challenges of clinical and biological big data, and review the state-of-the-art in genomics data warehousing.
I will also present versatile classification integration and reclassification methods that can combine existing classifications without requiring access to the raw data, and will discuss how they can be leveraged to combine clinical data with omics databases: more accurate predictors for diseases risks and pathologies, integrated with personal omics data, could ultimately lead to early diagnostics and personalized drugs to treat patients given their personal genetic background.
Dr. Thomas Triplet is a postdoctoral researcher at the Centre for Structural and Functional Genomics and the Department of Computer Science and Software Engineering at Concordia University. He is also a member of the professional Ordre des Ingénieurs du Québec. He earned his engineering diploma and Master's degree in Computer Science and Engineering, with distinction, in 2007 at the French National Graduate School of Engineering ENSICAEN. He completed his Ph.D. in bioinformatics after two years under the supervision of Prof. Peter Revesz at the University of Nebraska-Lincoln, USA, where he was a recipient of ISEP and Milton E. Mohr fellowships. His main research interests include the integration and mining of clinical and biological big data for personalized medicine, as well as the visualization and automated analysis of those data using machine learning.
HIGHER-ORDER MULTIDIMENSIONAL PROGRAMMING
Tuesday, March 5, 2013, 11:00 a.m., EV 3.309
In 1975, William W. Wadge and Edward A. Ashcroft introduced the language Lucid, in which the value of a variable was a stream. The successors to Lucid took two paths.
The first path, taken by Lustre, was to restrict the language so that a stream could be provided with a timed semantics, where the i-th element of a stream appeared with the i-th tick of the stream's clock, itself a Boolean stream. Today, Lustre is at the core of the Scade software suite, the reference tool for avionics worldwide.
The second path was to generalize the language to include multidimensional streams and higher-order functions. The latest language along this path is TransLucid, a higher-order functional language in which variables define arbitrary-dimensional arrays, where any atomic value may be used as a dimension, and a multidimensional runtime context is used to index the variables.
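The stream idea at the heart of this language family can be mimicked in ordinary Python (a hedged illustration, not Lucid syntax): a variable whose value is a stream becomes a generator, and Lucid's "followed by" operator becomes a tiny combinator.

```python
import itertools

def fby(head, tail):
    """Lucid's 'fby': the stream that starts with head, then follows tail."""
    yield head
    yield from tail

def nat():
    # nat = 0 fby (nat + 1), unrolled here as an ordinary loop
    n = 0
    while True:
        yield n
        n += 1

def running_sum(xs):
    """A stream transformer: element i is the sum of the first i+1 inputs."""
    total = 0
    for x in xs:
        total += x
        yield total

first3 = list(itertools.islice(fby(42, nat()), 3))      # [42, 0, 1]
first5 = list(itertools.islice(running_sum(nat()), 5))  # [0, 1, 3, 6, 10]
```

Lustre's timed semantics and TransLucid's multidimensional contexts both generalize this one-dimensional picture, in different directions.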
The presentation will focus on the key problems pertaining to design, semantics and implementation of Lustre and TransLucid, and show how the two paths are being brought back together in the TransLucid project.
Dr. John Plaice (BMath 1979, University of Waterloo, Canada; PhD 1984, Grenoble Institute of Technology, France; Habilitation 2010, University of Grenoble) is Adjunct Associate Professor at The University of New South Wales, Sydney, Australia.
He wrote the first semantics and compiler for Lustre (Synchronous Real-Time Lucid), the core real-time programming language in Esterel Technologies' Scade Suite, the leading solution in Europe for developing embedded software meeting stringent avionics standards. Since then, he has developed numerous techniques for adaptation to context, in programming languages, software configuration, electronic documents and digital typography.
THE WORLD AT YOUR GEOMETRICKS
Friday, March 1, 2013, 10:00 a.m., EV 11.119
Digital geometry processing is a powerful tool used ubiquitously in increasingly many aspects of the digital world, from games and movies to engineering, CAD, medicine and telepresence. It is also a relatively young field that over the last decade has developed a large set of new algorithms and techniques, which are increasingly finding their way into mainstream applications.
In this talk I will present some of my work in digital geometry processing, with applications in modeling, deformation, novel view synthesis, telepresence and teleconferencing. I will show how purely geometric algorithms can be used to solve complex problems. I hope to convince you of the relevance of digital geometry processing in today's digital world, and to inspire new students to use and study some of these techniques.
Dr. Tiberiu Popa is a postdoctoral researcher in the Computer Graphics Lab at ETH Zurich. He completed his Bachelor of Mathematics in 2001 and Master of Mathematics in 2004, both at the University of Waterloo in Canada. In 2010, Tiberiu obtained a PhD from the University of British Columbia in Canada, for which he received the Alain Fournier annual thesis award, and then started at ETH in January 2010. Since 2011 he has been coordinating the research efforts of the BeingThere Centre Zurich, a research collaboration between ETH, the University of North Carolina and Nanyang Technological University of Singapore on next-generation telepresence systems. Tiberiu's main research interests include digital geometry processing, spatio-temporal surface acquisition, free-viewpoint video, and telepresence.
A HYBRID FRAMEWORK FOR THE SYSTEMATIC DETECTION OF SOFTWARE SECURITY VULNERABILITIES IN SOURCE CODE
Tuesday, February 12, 2013, 15:00, EV 3.309
In this talk, we address the problem of detecting vulnerabilities in software whose source code is available, such as free and open-source software. To this end, we rely on security testing that conducts various analyses. Both static and dynamic analysis can be used in security testing approaches, and each has its advantages and drawbacks. In fact, while these analyses are different, they complement each other in many ways. Consequently, approaches that combine them have the potential to be very advantageous for security testing and vulnerability detection. This has motivated the research work discussed in this talk.
For the purpose of security testing, security analysts need to specify the security properties against which the software should be tested for violations. Accordingly, a security model extending security automata is introduced to allow such specifications. For the purpose of profiling the software's behavior at run time, various code instrumentations are needed at different program points. We hence explore this subject and introduce a compiler-assisted profiler based on the pointcut model of Aspect-Oriented Programming (AOP) languages. Third, we explore the potential of static analysis for vulnerability detection and illustrate its applicability and limitations, with an additional focus on reachability analysis.
Finally, we introduce a more comprehensive security testing and test-data generation framework that provides further advantages over the purely static-analysis model. The framework combines the power of static and dynamic analyses, and is used to generate concrete test data with which the existence of a vulnerability is proven beyond doubt, hence mitigating a major drawback of static analysis, namely false positives. We further illustrate the feasibility of the elaborated frameworks by developing case studies for test-data generation and vulnerability detection on software of various sizes and complexities.
Dr. Aiman Hanna received his Bachelor's in Engineering from Assiut University, Egypt, in 1988, and his Master's and Ph.D. in Computer Science from Concordia University, Canada, in 2000 and 2012, respectively. He worked as a Senior Software Engineer and Team Leader for more than eight consecutive years for some of the largest firms in Canada (BCE and CGI). He is currently a full-time professor at Concordia University, where he has been working for nearly 22 years. His research interests include software security, secure software engineering, vulnerability detection, software security hardening, formal automatic specification, language technologies, formal semantics, and code analysis techniques. For his research work, Dr. Hanna was the recipient of the 2009 OCTAS Award from the Fédération de l'Informatique du Québec (FIQ). He has also been the recipient of the Faculty of Engineering and Computer Science Teaching Excellence Award in 1999, and the Concordia University CCSL Teaching Excellence Award in 2001. Dr. Hanna holds a Professional Engineering License and is a member of Professional Engineers Ontario (PEO).
MATCHINGS, PERFECT MATCHINGS, AND THE LOVÁSZ-PLUMMER CONJECTURE
Friday, February 8, 2013, 10:00 a.m., EV 3.309
A matching in a graph is simply a set of edges, no two of which share an endpoint. Matchings are fundamental not only to graph theory, but to computer science in general, as they can be used to model a broad class of problems in which we must pair up certain objects according to a set of basic constraints. In this talk I will discuss perfect matchings in bipartite graphs -- it is immediately clear how such objects model problems in which we must match objects in one set to objects in another set, with nothing excluded, for example when we need to match network traffic requests to servers. But these perfect matchings also have less obvious applications in areas such as complexity theory and mathematical chemistry.
In the 1970s, Lovász and Plummer conjectured that the number of perfect matchings in a bridgeless cubic graph is exponential in its size. This was proven by Voorhoeve for bipartite graphs and by Chudnovsky and Seymour for planar graphs. I will present a proof of the full conjecture that uses elements of both earlier proofs, as well as properties of the perfect matching polytope.
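As a concrete companion to the request-to-server example above, here is the classical augmenting-path algorithm for bipartite matching (chosen purely for illustration; the talk's contribution is a combinatorial counting proof, not an algorithm).

```python
def bipartite_matching(adj, n_left, n_right):
    """adj[u] lists the right-side neighbours of left vertex u.
    Returns the size of a maximum matching."""
    match_right = [-1] * n_right  # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its current partner can be rematched elsewhere
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(n_left))

# toy instance: traffic requests {0,1,2} to servers {0,1,2}
adj = [[0, 1], [0], [1, 2]]
size = bipartite_matching(adj, 3, 3)  # 3, so a perfect matching exists
```

Each call to try_augment searches for an augmenting path in the sense of Berge's theorem; when no left vertex remains unmatched, the matching is perfect.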
This is joint work with Louis Esperet, Frantisek Kardos, Daniel Kral, and Sergey Norin.
Andrew King is a PIMS Postdoctoral Fellow working with Pavol Hell and Bojan Mohar at Simon Fraser University. He received his Ph.D. from the School of Computer Science at McGill University under the supervision of Bruce Reed, writing his dissertation on the subject of colouring and decomposing claw-free graphs. Following this he spent two years as an NSERC Postdoctoral Fellow with Maria Chudnovsky at Columbia University's Industrial Engineering and Operations Research Department. His main research interests include graph algorithms, bounding the chromatic number, graph clustering, and structural decomposition.
ENERGY EFFICIENT CLOUD NETWORKING: STATE OF THE ART, OPPORTUNITIES AND CHALLENGES
Friday, January 25, 2013, 10:00 a.m., EV 11.119
Cloud computing is a newly emerging paradigm that allows ubiquitous provisioning of software, platform and infrastructure services and enables the offloading of local resources. Data centers, the main hosts of cloud computing services, accommodate thousands of high-performance servers and high-capacity storage units. One of the major challenges in cloud computing is energy efficiency: offloading local resources reduces the energy consumption of the end hosts but increases that of the transport network and the data centers. In this talk, I will present existing solutions, opportunities and challenges in the design of an Internet backbone with data centers and in the energy-efficient delivery of cloud services. A case study will follow, introducing Mixed Integer Linear Programming (MILP)-based provisioning models and heuristics that guarantee either minimum-delay or maximum-power-saving cloud services. I will then extend the scope of the talk to network-aware intra- and inter-data-center virtual machine placement with a commitment to energy efficiency. Furthermore, in connection with the advantages of the smart grid, I will introduce recent research results on the impact of Time of Use (ToU)-aware provisioning on the OpEx of network and data center operators. Opportunities and research challenges in this area, including Wireless Sensor Network-based thermal monitoring of data centers as well as security and privacy issues, will conclude the presentation as part of the immediate research agenda.
Burak Kantarci is a postdoctoral fellow at the School of Electrical Engineering and Computer Science of the University of Ottawa. His research at uOttawa is supervised by Prof. Hussein Mouftah, who also co-supervised his PhD thesis. Dr. Kantarci received the M.Sc. and Ph.D. degrees in Computer Engineering from Istanbul Technical University in 2005 and 2009, respectively, and he completed the major content of his PhD thesis at the University of Ottawa between 2007 and 2008. He was the recipient of the Siemens Excellence Award in 2005 for his contributions to optical burst switching research. He has co-authored seventeen articles in established journals and forty-seven papers at many flagship conferences, and has contributed five book chapters. He is a co-editor of the forthcoming book, Communication Infrastructures for Cloud Computing, to be published by IGI Global in 2013. He has been serving on the TPCs of the Green Communication Systems Track of the IEEE GLOBECOM and IEEE ICC conferences. Dr. Kantarci is a Senior Member of the IEEE, and a founding member of the IEEE ComSoc Technical Sub-committee on Green Communications and Computing.
NAMED ENTITIES DETECTION AND ENTITY LINKING IN THE CONTEXT OF SEMANTIC WEB
Friday, December 7, 2012, 11:00 a.m., EV 3.309
Entity linking consists in establishing the relation between a textual entity in a text and its corresponding entity in an ontology. The main difficulty of this task is that a textual entity may be highly polysemous and potentially related to many different ontological representations. To solve this problem, various information retrieval techniques can be used, most of which rely on contextual words to estimate exactly which entity has to be recognized. In this talk, we will explore the question of entity linking and the disambiguation problems it involves. We will describe how a detection and disambiguation resource built from the Wikipedia encyclopaedic corpus can be used to establish a link between a named entity (NE) in a text and its normalized ontological representation in the semantic web.
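A crude sketch of the disambiguation step may help fix ideas. The candidate entities and their context bags below are invented stand-ins for the Wikipedia-derived resource described in the talk, and the overlap score is the simplest possible IR ranking, not the talk's actual method.

```python
# toy resource: each candidate ontological entity carries a bag of context words
candidates = {
    "Paris_(France)":    {"france", "capital", "seine", "city"},
    "Paris_(Texas)":     {"texas", "usa", "town"},
    "Paris_(mythology)": {"troy", "helen", "greek"},
}

def link_entity(surface_context, candidates):
    """Pick the candidate whose context bag overlaps the mention's
    surrounding words the most."""
    words = set(surface_context.lower().split())
    return max(candidates, key=lambda c: len(candidates[c] & words))

best = link_entity("He flew to Paris, the capital of France", candidates)
```

Here the words "capital" and "france" around the mention select the French city over the Texan town and the mythological figure.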
Eric Charton (Ph.D., M.Sc.) is a researcher in machine learning and natural language processing and their application to the semantic web. He has worked in various university labs in France (Laboratoire Informatique d'Avignon) and Québec (École Polytechnique de Montréal). His research work has been tested in scientific evaluation campaigns such as CoNLL and Ester, and is publicly released in the NLGbAse ontology (www.nlgbase.org) and the Wikimeta semantic labeling tool (www.wikimeta.com).
Eric Charton is also the author of general-audience books on computer science and technology published by Pearson and Simon & Schuster Macmillan. He currently works as a researcher at the Centre de Recherche Informatique de Montréal (CRIM) on a project to improve a search engine using semantic web techniques.
Friday, November 9, 2012, 14:00, EV 3.309
Graph management is a very important problem nowadays. In this talk we introduce the work of DAMA-UPC on graph management, present some concepts in the area, and propose a technology for managing large graphs efficiently. The technology presented, DEX, has evolved into a software product commercialized and developed by Sparsity Technologies (www.sparsity-technologies.com), a spin-out of UPC. We show results comparing DEX to other technologies capable of solving the same graph problems, demonstrating better performance and scalability for DEX on large graphs on single-processor hardware.
Josep Lluis Larriba Pey is the director of DAMA-UPC, Barcelona, Spain. His interests include performance, exploration and quality in data management, focusing particularly on large data volumes.
Monday, November 5, 2012, 15:00, EV 3.309
Graphics processing units (GPUs) on commodity video cards were originally designed to meet the 3-D gaming industry's need for high-performance, real-time graphics. They have since become powerful co-processors to CPUs: the top-of-the-line Nvidia GPUs for computation have 512 cores on one chip. Scientists and engineers from many disciplines are exploring ways to use this massive amount of parallel computation. This presentation gives an introduction to GPU hardware and programming, and a survey of some applications.
Dr. Ming Ouyang has a B.S. degree in Computer Science from National Taiwan University, an M.S. degree in Computer Science from Stony Brook University, and a Ph.D. degree in Computer Science from Rutgers University, under the supervision of Dr. Vasek Chvatal. He joined the Computer Engineering and Computer Science Department of the University of Louisville as an Assistant Professor in 2007.
Thursday July 19, 3:00-4:00PM, EV 3.309
Computational geometry is a field that deals with algorithmic aspects of geometric problems. Geometric problems pervade a broad spectrum of disciplines, with cartography, computer vision, wireless communications, robotics, and computer-aided design and manufacturing representing but a few. In computational geometry, we study geometric problems at various levels of abstraction from the real-life applied problems from which they may be drawn: sometimes, we may work to provide practical solutions that are as efficient and as accurate as possible for immediate use; other times, we may work to establish clear bounds on the complexity of more theoretical abstract questions whose practical applicability is not yet apparent. And yet the theoretical tools that we develop today may affect the practice of tomorrow.
This talk will focus on a particular branch of computational geometry, that of geometric reconfigurations. Geometric reconfigurations abound in numerous applications, from nanoscale self-assembly to the movement of robot arms to solving Rubik's Cubes and other mathematical puzzles. This talk will explore some different types of geometric reconfiguration as well as the underlying computational techniques used to solve these problems. No specific background is assumed.
Professor Souvaine's CV is available here.
Many interesting sketch-based modeling techniques have been developed in recent years. However, these techniques are mostly suitable for creating simple and usually low-quality shapes. To address this shortcoming, two research projects have been explored in my group. In this talk, I will present these sketch-based projects (NaturaSketch and PUPs) for modeling detailed 3D shapes. NaturaSketch is an image-assisted sketch-based system for creating and deforming subdivision and multiresolution surfaces. PUPs (Partition of Unity Parametrics) is a natural extension of NURBS that allows us to support high-quality sketched features.
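The core idea behind partition-of-unity blending can be sketched numerically. Note the Gaussian basis below is an invented choice for illustration, not the actual PUPs basis: arbitrary weight functions are normalized so they sum to one at every parameter value, then used to blend control points, just as NURBS blends with normalized rational B-spline bases.

```python
import math

def pup_point(t, ctrl, centers, width=0.35):
    """Evaluate a toy partition-of-unity curve at parameter t by
    normalizing bump weights so they sum to 1, then blending control points."""
    w = [math.exp(-((t - c) / width) ** 2) for c in centers]
    s = sum(w)  # normalization makes the weights a partition of unity
    x = sum(wi / s * px for wi, (px, py) in zip(w, ctrl))
    y = sum(wi / s * py for wi, (px, py) in zip(w, ctrl))
    return (x, y)

ctrl = [(0, 0), (1, 2), (2, 0)]      # three control points
centers = [0.0, 0.5, 1.0]            # one weight function per control point
mid = pup_point(0.5, ctrl, centers)  # x is exactly 1.0 by symmetry
```

Because any weight functions can be normalized this way, the scheme can host basis functions tailored to sketched features, which is the flexibility the abstract alludes to.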
Faramarz F. Samavati is a Professor and Associate Head (Graduate Director) of the Department of Computer Science at the University of Calgary. His research interests include computer graphics, visualization and 3D imaging. Dr. Samavati has published more than 90 papers and one book, and has filed 2 patents. Currently, he is an Associate Editor of Computers & Graphics (an Elsevier journal) and a Network (principal) Investigator of the GRAND NCE (Networks of Centres of Excellence of Canada in Graphics, Animation and New Media), where he is also the lead of the SKETCH project.
Tuesday May 1, 2012, 11am-12noon, EV3.309
We wanted to gain a detailed empirical understanding of how researchers come across information serendipitously, grounded in real-world examples. To gain this understanding, we asked 28 researchers from a broad cross-section of disciplines to discuss in detail memorable examples of coming across information serendipitously in their research or everyday life. We found that although the examples provided were varied, they shared common elements (specifically, they involved a mix of unexpectedness and insight and led to a valuable, unanticipated outcome). These elements form the core of 1) a descriptive model of serendipity and 2) a framework for subjectively classifying whether or not a particular experience might be considered serendipitous and, if so, how serendipitous. In this talk, we discuss this model and framework and the implications of our findings for the design of interactive systems.
Dr. Stephann Makri is a Research Associate at University College London Interaction Centre and is conducting research as part of a £1.87m UK Research Council funded project (SerenA: Chance Encounters in the Space of Ideas), which aims to understand how people come across information 'serendipitously' and to design ubiquitous computing systems based on this understanding.
Tuesday February 14, 2012, 10:00am, EV3.309
Adaptive Programming (AP) provides advanced code modularization for traversal-related concerns in object-oriented programs. Computation in AP programs consists of (i) a graph-based model of a program's class hierarchy, (ii) a navigation specification, called a strategy, and (iii) a visitor class with specialized methods executed before and after traversing objects. Despite the benefits of AP there are also limitations: hardcoded name dependencies between strategies and the class hierarchy, as well as non-modular adaptive code (strategies and visitors). These limitations hamper adaptive code reuse and make composition and extension of adaptive code difficult.
To address these limitations we define "What You See Is What You Get" (WYSIWYG) strategies, constraints and Demeter Interfaces. WYSIWYG strategies guarantee the order of strategy nodes in selected paths, simplifying the semantics of strategies and leading to more predictable behavior. Constraints provide a new mechanism that allows programmers to define invariants on the graph-based model of a program's hierarchy, thereby making programmers' assumptions explicit and verifiable at compile time. Finally, Demeter Interfaces provide (i) an interface between the program's class hierarchy and both strategies and visitors, (ii) statically verifiable constraints on the structure of a class hierarchy that implements a Demeter Interface and (iii) the ability to parametrize adaptive code.
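The traversal-plus-visitor computation model of AP can be sketched generically (a hedged toy, not the Demeter tools themselves): a walker follows the object graph implied by the class structure and fires before_/after_ hooks named after the classes it visits, the class names being exactly the kind of hardcoded dependency the talk's Demeter Interfaces aim to tame.

```python
class Company:
    def __init__(self, depts):
        self.depts = depts

class Dept:
    def __init__(self, salaries):
        self.salaries = salaries

def traverse(obj, visitor):
    """Walk the object graph via instance attributes, calling
    before_<Class>/after_<Class> visitor methods when they exist."""
    hook = getattr(visitor, "before_" + type(obj).__name__, None)
    if hook:
        hook(obj)
    for value in vars(obj).values():
        for child in (value if isinstance(value, list) else [value]):
            if hasattr(child, "__dict__"):  # only descend into objects
                traverse(child, visitor)
    hook = getattr(visitor, "after_" + type(obj).__name__, None)
    if hook:
        hook(obj)

class SalarySum:
    """The 'adaptive' part: only mentions the one class it cares about."""
    def __init__(self):
        self.total = 0
    def before_Dept(self, d):
        self.total += sum(d.salaries)

v = SalarySum()
traverse(Company([Dept([100, 200]), Dept([50])]), v)  # v.total == 350
```

Note how SalarySum never names Company; the traversal supplies the navigation, which is what makes such code "adaptive" to structural change.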
We further show that our results can be applied to other technologies that share the key property of adaptive programs (traversals of graph-like structures using selector languages), such as XML processing, and discuss new future directions made possible by the advantages Demeter Interfaces bring to AP.
After receiving a B.Sc. in Joint Mathematics and Computer Science at Imperial College, I joined the Illinois Institute of Technology (IIT), where I started a PhD in Computer Science under the supervision of Dr. Morris Chang with a focus on memory management systems for the JVM. In 2001, after receiving my Master's from IIT, I moved to Northeastern University, where I completed my PhD in Computer Science under the supervision of Dr. Karl Lieberherr with a focus on software engineering and programming languages. For the past three years I have been working as a Software Development Engineer at Amazon, where I have been involved in the design, implementation, and maintenance of an internally developed web framework and web services used by multiple teams to develop web sites and distributed business applications.
Thursday February 9, 2012, 10:00am, EV3.309
How do we know which Software Engineering (SE) practices produce the highest quality software systems on time and on budget?
In answer to this question, I will discuss my Empirical Software Engineering research program. This area interests me because validated empirical evidence on software practices allows us to understand which are successful and in which contexts they can be transferred to other projects. Adoption of evidence-based SE practices increases the likelihood that a software product will be of high quality, as well as on time and on budget. To generate validated hypotheses and more general theories, empirical SE requires the collection of evidence from multiple sources and the use of a variety of methods (triangulation). I have shown the fruitfulness of this approach in a number of empirical SE studies. For example, I examined the effects of distributed version control on developer organization and system architecture. In my dissertation, I performed a systematic and comparative evaluation of open source software peer review practices.
Peter C. Rigby is a postdoctoral researcher working with Dr. Robillard at McGill University in Montreal. He received his PhD from the University of Victoria for his examination of the peer review practices used by OSS projects. His PhD advisers were Dr. Storey and Dr. German. He received a Bachelor's degree in Software Engineering at the University of Ottawa and has taught two third-year Software Engineering courses (Software Maintenance and HCI). His primary research interest is mining empirical software engineering data to understand how people collaborate to design and develop large, successful software systems. His three current research areas are: informal API documentation quality, lightweight industrial review techniques, and the effect distributed version control is having on developer collaboration. Please see helium.cs.uvic.ca for more details.
Wednesday, February 8, 2012, 13:00, EV 11.705 (Hexagram-Concordia Research-Creation Brown Bag Series)
This presentation focuses on an experimental project featuring a minimalist embodied agent embedded in real life. The agent adapts to its environment through a single perceptual modality by relying on a machine learning approach. The goal of this project is to start experimenting with interactive learning agents as ways of creating meaningful aesthetic experiences. By appealing to different concepts in cultural studies, science and technology studies, cognitive science, phenomenology and performativity theory, I build the argument that the embodied interaction of the agent with its world becomes the site of an aesthetic experience and the production of meaning. Furthermore, I show how its connectionist structure and its learning behavior augment the world by extending it with a brain-like phenomenon that couples with it.
Hexagram-Concordia: Research-Creation Brown Bag Series
Graduate students in all disciplines are invited to present their practice and/or research and engage with active peers on topics surrounding research-creation, ontological perspectives on art, and how artistic practices create knowledge, among others. This series of student-organized talks, seminars, and roundtable discussions has been initiated in an effort to strengthen graduate student participation in Hexagram as a platform for furthering exchange and collaboration. A regular calendar of talks through Winter 2012 is currently being organized. Please look out for more announcements related to this series in the coming months.
For more information contact the organizers via Harry Smoak.
Tuesday February 7, 2012, 10:00am, EV3.309
In this talk I will cover the results of my research work on the detection of design pattern instances and the identification of refactoring opportunities in object-oriented systems. The knowledge of the design pattern instances implemented in a software system provides a better understanding of its overall architecture and the design decisions made during its evolution, facilitates its extension to new requirements through pattern extension mechanisms and improves the communication among its developers through a common vocabulary of design concepts. However, finding the implemented pattern instances in a software system is not a trivial task, since they are usually not documented, they do not follow the standard naming conventions, their implementation may deviate from their standard description and their manual detection is prohibitive for large systems. To overcome all these difficulties, we have proposed a technique for the structural detection of design pattern instances that is based on a graph similarity algorithm. The proposed technique is scalable to large systems, robust to pattern deviations, highly accurate and easily extensible to new pattern definitions. According to several studies, maintenance occupies the largest percentage (even 90%) of the total software development cost. This is due to the fact that a software product should constantly evolve by providing new features, bug fixes, performance improvements, and integration of novel technologies in order to remain competitive and diachronically successful.
Despite the major importance of software maintenance, the resources invested by software companies on preventive maintenance (i.e., maintenance aiming to improve maintainability and avoid future design problems) are very limited (lower than 5% of the total maintenance cost), since the manual and human-driven inspection of source code requires tremendous effort and leads to long-term benefits that do not add immediate value to the software product. As a result, there is a clear need for supporting and automating the preventive maintenance process with tools. To this end, we have developed techniques that resolve major design problems by identifying and suggesting appropriate refactoring opportunities. This refactoring-oriented approach provides a complete solution for preventive maintenance (in contrast to existing approaches focusing only on the detection of design problems) by covering all distinct activities of the refactoring process. This includes the application of the suggested refactoring solutions in a way that preserves program behavior, and a ranking mechanism based on their impact on design quality that allows maintenance effort to be prioritized on the parts of the program that would benefit the most.
Nikolaos Tsantalis received his BS, MS and PhD degrees in applied informatics from the University of Macedonia, Greece, in 2004, 2006 and 2010, respectively. He is currently a Postdoctoral Fellow at the Department of Computing Science, University of Alberta, Canada. His research interests include design pattern detection, identification of refactoring opportunities, and design evolution analysis. He has developed tools, such as the Design Pattern Detection tool and JDeodorant, which have been widely acknowledged by the software maintenance community. He is a member of the IEEE and the IEEE Computer Society.
Tuesday January 31, 2012, 10:00am, EV3.309
The stringent performance constraints on application software continue to grow, particularly in the embedded and mobile computing domains. Conventional processors and on-chip communication architectures do not provide the necessary throughput for meeting these constraints. There is a widely acknowledged need for Application-Specific Processors (ASPs), as well as fast and scalable on-chip communication technology, to meet the performance needs of next-generation embedded applications. In this talk, I will present my work on two distinct, yet complementary, design technologies for embedded computing platforms. The first design technology is automatic generation of ASPs from software. I will discuss fast, scalable and controllable algorithms to automatically create a pipelined ASP core from a given application C code. The performance and resource usage of the generated ASP is comparable to manual hardware design, thereby providing much faster computing than a general purpose processor, while still being programmable. The second design technology is Optical Network-on-Chip (ONoC), which provides significantly higher communication bandwidth than conventional communication architectures in multi-core systems. I will present algorithms for optimizing the mapping of communication channels to optical waveguides, which is a key problem in ONoC-based design. Finally, I will discuss my future research directions that aim to build a comprehensive framework for design and optimization of ASP- and ONoC-based embedded computing platforms.
Bio: Jelena Trajkovic is a ReSMiQ post-doctoral scholar at Ecole Polytechnique de Montreal and is affiliated with the Center for Embedded Computer Systems at UC Irvine. She received her PhD from the University of California, Irvine in 2009. She holds an M.S. in Information and Computer Science from the University of California, Irvine (2003), and a Dipl. Ing. degree in electrical engineering from the School of Electrical Engineering at the University of Belgrade, Serbia (2000). Her research interests include novel architectures and design automation methods for embedded systems, as well as design and modeling of optical networks-on-chip for many-core platforms.
Thursday January 26, 2012, 10:00am, EV11.119
Today's world economy relies heavily on large-scale software infrastructures that facilitate the seamless integration of information and enable the smooth interaction of heterogeneous systems. The development of such software infrastructure requires the investment of tremendous amounts of resources. For instance, the United States alone spends around $250B annually on software development projects. However, only a very small portion of these investments is actually fruitful: over 80% of these projects do not meet the set expectations. In this talk, I will focus on three pivotal characteristics of software development, namely agility, product quality, and system scale, which can be influential on the success or failure of software development endeavors. I will discuss how the weaving of quality engineering techniques into the software product line engineering paradigm can result in effective impacts on the software development process. The details of a decision support platform incorporating semantic Web, natural language processing and visualization techniques for enhancing the quality of software product line engineering artifacts, and its observed empirical impact on software developers, will be further discussed.
Bio: Ebrahim Bagheri is currently an Assistant Professor at the AU School of Computing and Information Systems and a Visiting Professor with the University of British Columbia. He also enjoys an IBM CAS Faculty Fellowship and an Honorary Research Associate appointment at the University of New Brunswick. Ebrahim specializes in topics ranging from the meta-modeling of complex interconnected systems to collaborative information systems design. Currently, his research focuses on two areas, namely quality engineering for software product lines and knowledge management for enterprise engineering. His work on collaborative modeling is one of a kind in providing tools and techniques for collaborative risk management and quality engineering. He has published over 80 papers in top-tier journals and conferences and has served as Program Committee Chair and Member of several international conferences and workshops.
Tuesday January 24, 2012, 10:00am, EV11.119
Use case modeling has become a part of mainstream software engineering practice as a key activity in conventional software development processes. When written correctly, the use case model has the potential to drive all subsequent development work and serves as a reference point for maintenance and documentation purposes. Writing effective and well-structured use cases, however, is a difficult task which requires a thorough understanding of the concepts and techniques involved. Current practice has shown that it is easy to misuse them or to make mistakes that can render them useless at best and result in the propagation of incorrect requirements in many cases.
In this presentation, I survey a number of best practices, praxis-proven templates and guidelines for writing effective use cases. I continue with a review of my past and current research in the area of use case semantics, test case generation, and merging of use case models. I then examine their interrelation with non-functional requirements such as user interface and business transaction requirements. I conclude by presenting a vision of a unified model for software requirements which is followed by a discussion of anticipated research in the area of requirements engineering.
Bio: Dr. Sinnig is a Senior Consultant in Application Security at the Desjardins Technology Group. He holds a PhD in Computer Science from Concordia University and completed his post-doctoral tenure at the University of Rostock (Germany). His research interests lie in software engineering and human-computer interaction, with a particular focus on unifying theories and models that can bridge both disciplines. Dr. Sinnig is a co-author of the ISO 27034 standard on application security and a member and officer of the IFIP 13.2 working group on Methodologies for User-Centered Systems Design. He has held various awards and scholarships including the NSERC PGS and PDF awards and received the Concordia University Doctoral Prize in Engineering and Computer Science for the 2009 academic year.
Tuesday, January 17, 10:00am, EV3.309
A self-adaptive system changes its behaviour in response to stimuli from its execution and operational environment. As software is used for more pervasive and critical applications, support for self-adaptation is increasingly seen as vital in avoiding costly disruptions for repair, maintenance and evolution of systems. However, the wider use of self-adaptive systems in a variety of domains also leads to more challenges in designing and developing them. Self-adaptation may result in changes to some functionality, algorithms, or system parameters, as well as to the system's structure or any other system aspect. Moreover, an autonomic self-adaptive system has intrinsic intelligence that may help it reason about situations where autonomous decision making is required.
In this talk, I briefly survey some of my past, ongoing and future research that strives to meet the challenge of developing autonomic self-adaptive systems. The talk spans over multiple projects and covers: expressing self-* requirements; modeling autonomic systems; developing software-verification mechanisms for self-adaptive systems; handling uncertainty in self-adapting behaviour; knowledge representation and reasoning for cognitive systems; and awareness.
Bio: Dr. Emil Vassev received his M.Sc. in Computer Science (2005) and his Ph.D. in Computer Science (2008) from Concordia University, Montreal, Canada. Currently, he is a research fellow at Lero (the Irish Software Engineering Research Centre) at the University of Limerick, Ireland, where he is: 1) leading the Lero participation in the ASCENS European FP7 project; 2) leading Lero's joint project with ESA on Autonomous Software Systems Development Approaches; and 3) participating in the FastFix European FP7 project and in the MODEVO project. Dr. Vassev's current research focuses on knowledge representation and self-awareness for self-adaptive systems. More broadly, his research interests are in software development methodologies for developing autonomic systems. Dr. Vassev holds a USA NASA Patent on "Method of Improving System Performance and Survivability through Self-sacrifice".
Friday, December 2, 14:00, EV003.309
In this talk I briefly survey some of my previous research and then even more briefly extrapolate as to future extensions of this work. I will talk about improving Automatic Speech Recognition (ASR) for speakers with speech disabilities by incorporating knowledge of their speech production. This involves the acquisition of the TORGO database of disabled articulation, which demonstrates several consistent behaviours among speakers, including predictable pronunciation errors. Articulatory data are then used to train augmented ASR systems that model the statistical relationships between the vocal tract and its acoustic effluence. I show that dynamic Bayesian networks augmented with instantaneous articulatory variables outperform even discriminative alternatives. This leads to work that incorporates a more rigid theory of speech production, i.e., task-dynamics, that models the high-level and long-term aspects of speech production. For this task, I devised an algorithm for estimating articulatory positions given only acoustics that significantly outperforms the former state-of-the-art. Finally, I present ongoing work on the transformation of disabled speech signals in order to make them more intelligible to human listeners, and I conclude with some thoughts as to possible paths we may now take.
Bio: Frank Rudzicz received his PhD in Computer Science from the University of Toronto in 2011, his Master's degree in Electrical and Computer Engineering from McGill University in 2006, and his Bachelor's in Computer Science from Concordia University in 2004. He is the recipient of a MITACS Accelerate Canada award, a MITACS Industrial Elevate award, and an NSERC Canada Graduate Scholarship. His expertise includes parsing in natural language processing, acoustic modelling, multimodal interaction, and speech production.
Monday, October 3, 2011, 10:30am, EV003.309 (open to all)
Wednesday, July 6, 2011, 13:00, EV003.309
This investigation is concerned with a queueing model for the performance analysis of a manufacturing system with standbys, working vacations, and server breakdowns. As soon as an operating unit fails, it is immediately replaced by a standby unit for the smooth running of the manufacturing system. When there is no failed unit in the system, the server goes on vacation; in the meanwhile, the server performs some work and is said to be on a working vacation. The lifetimes and the repair times of the manufacturing units are assumed to be exponentially distributed. The matrix-geometric method is used to evaluate various performance measures such as the expected number of failed units and the expected number of operating units in the manufacturing system, machine availability, operating utilization, etc. The cost function is established to maximize the gain. A sensitivity analysis is also carried out to examine the effect of different parameters on various system characteristics.
Professor G.C. Sharma is a former Professor and Head of the Department of Mathematics and Computer Science, Institute of Basic Science, Agra. He was the founder director of the “Seth Padam Chand Jain Institute of Commerce, Business Administration and Economics”, the “Institute of Vocational Education”, and the “Institute of Engineering and Technology” of Dr. B. R. Ambedkar University, Agra. More than 50 students received their Ph.D. degrees under his supervision. More than 150 research papers and 20 books are to his credit. His areas of research include queueing and reliability models, computational fluid dynamics, bioinformatics, etc. At present he is actively engaged in interdisciplinary research on the modeling of human diseases, namely HIV, TB, malaria, cancer, etc.
Tuesday, July 5, 2011, 13:00, EV003.309
Wireless communication networks need to utilize their channels efficiently in order to achieve a desired goal. The problem of allocating the channels in an efficient manner in order to obtain maximum output is of vital importance. In the present investigation, an optimal channel allocation scheme for cellular radio systems is suggested, in which a specific number of channels is reserved for handoff calls to give them priority over new calls. Provision for sub-rating and a buffer is made. The calls are assumed to arrive in Poisson fashion, whereas the service times along with the cell residence times are exponentially distributed. To establish steady-state indices, the product method is employed by balancing the in-flow and out-flow rates. The Runge-Kutta (R-K) technique is used to obtain the solution of the system of transient equations. Various performance indices are also established in terms of transient probabilities. A sensitivity analysis is also carried out to examine the effects of various system parameters on the performance measures.
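As a rough illustration of the channel-reservation idea described above, the steady-state blocking probabilities of a basic guard-channel (cut-off priority) scheme can be computed from birth-death balance equations. This is a minimal sketch under simplifying assumptions (no sub-rating, no buffer, illustrative parameter values), not the full model analyzed in the talk:

```python
# Minimal sketch: steady-state blocking in a guard-channel scheme.
# C channels, g reserved for handoffs; Poisson arrivals (rates lam_new,
# lam_handoff), exponential holding times (rate mu). All values illustrative.

def guard_channel_blocking(C, g, lam_new, lam_handoff, mu):
    """Return (P_block_new, P_block_handoff) for a cut-off priority scheme."""
    # Unnormalized birth-death probabilities p[n], n = 0..C busy channels.
    p = [1.0]
    for n in range(C):
        # New calls are admitted only while fewer than C - g channels are busy.
        arrival = lam_new + lam_handoff if n < C - g else lam_handoff
        p.append(p[-1] * arrival / ((n + 1) * mu))
    total = sum(p)
    p = [x / total for x in p]
    p_new = sum(p[C - g:])   # new call blocked when >= C - g channels busy
    p_handoff = p[C]         # handoff blocked only when all channels busy
    return p_new, p_handoff

pn, ph = guard_channel_blocking(C=10, g=2, lam_new=4.0, lam_handoff=1.0, mu=1.0)
print(pn, ph)  # handoff blocking comes out lower than new-call blocking
```

Reserving even a couple of channels (g = 2 here) drives handoff blocking well below new-call blocking, which is exactly the prioritization the scheme aims for.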
Dr. Madhu Jain is a faculty member in the Department of Mathematics, Indian Institute of Technology Roorkee, India. She is the recipient of two gold medals at the M.Phil. level. She has more than 200 research publications in reputed journals, including Applied Mathematical Modelling, Applied Mathematics and Computation, Computers and Operations Research, Computers in Biology and Medicine, etc. She was conferred the Young Scientist Award of the Department of Science and Technology (India) and the Career Award of the University Grants Commission (India). Her current research interests include queueing theory, stochastic models, software reliability, wireless communication, bioinformatics, etc.
Monday, July 4, 2011 from 10:00 - 11:00am in EV3.309
Redundant Arrays of Independent Disks (RAID) systems have come into widespread use because of their enhanced I/O bandwidths, large capacities, and low cost. However, the increasing demand for greater array capacities at low cost has led to the use of arrays with larger and larger numbers of disks, which increases the likelihood of two or more concurrent random disk failures. Hence the need for RAID systems that tolerate two or more random disk failures without compromising disk utilization. In this talk, we will present a novel algorithm based on the perfect 1-factorization of the complete graphs K_P and K_{2P-1} for placing data and parity in two-disk fault-tolerant arrays with (P - k) and (2P - 1 - k) disks respectively, where P is a prime number and k ≥ 1. Furthermore, we determine the fraction of space used for storing parity in such arrays and show that this fraction attains its optimal value when k = 1.
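The combinatorial object underlying such parity placements is a 1-factorization: a partition of the edges of a complete graph into perfect matchings. As a hedged illustration only, the sketch below builds the classic round-robin (circle-method) 1-factorization of K_{2n}; it is not the perfect 1-factorization construction presented in the talk:

```python
# Hedged illustration: the round-robin 1-factorization of K_{2n}.
# This shows what a 1-factorization is (2n-1 perfect matchings that
# together cover every edge of K_{2n} exactly once); it is NOT the
# talk's perfect 1-factorization algorithm.

def round_robin_factorization(n):
    """Return the 2n-1 one-factors of K_{2n} on vertices 0..2n-1."""
    m = 2 * n - 1
    factors = []
    for i in range(m):
        matching = [(m, i)]  # vertex 2n-1 stays fixed and pairs with i
        for j in range(1, n):
            # Remaining vertices are paired symmetrically around i, mod 2n-1.
            matching.append(((i + j) % m, (i - j) % m))
        factors.append(matching)
    return factors

factors = round_robin_factorization(4)            # K_8
all_edges = {frozenset(e) for f in factors for e in f}
print(len(factors), len(all_edges))               # 7 matchings, 28 distinct edges
```

Each of the 7 matchings covers all 8 vertices once, and the 7 × 4 = 28 edges are exactly the edge set of K_8; in a RAID-style placement, each factor can index one stripe's pairing of disks.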
Narsingh Deo is the Millican Chair Professor of Computer Science and Director of the Center for Parallel Computation at University of Central Florida. A Fellow of the IEEE and a Fellow of the ACM, Prof. Deo has authored four textbooks and over 200 refereed papers on graph theoretic algorithms, combinatorial computing, discrete optimization, and parallel computation.
Tuesday, June 7, 2011 from 10:30-11:30AM in EV003.309
While Software Engineering traditionally has not been widely popular amongst industrial software development practitioners, its maturity and the need for it are being felt by these practitioners all the more. Is it that the field of Software Engineering is stagnating? An analysis of the paper presentations by topic at the recently concluded ICSE 2011 conference shows disturbing imbalances in the priorities of the Software Engineering research being pursued. In this talk, which was delivered at the ICSE 2011 conference, we highlight the role played by Grand Challenge initiatives and the need for them in contemporary software engineering research. We highlight these grand challenge opportunities in six areas of advanced software engineering, particularly in the context of cloud computing, ubiquitous networked smart devices, social networks, rapid software development, compositionality of components and services, and secure testing and validation.
T.S. Mohan works at Infosys Technologies E&R's ECom Research Lab as a Principal Researcher. His research interests include distributed systems, high performance computing, cloud and grid computing, as well as software architecture and software engineering. He has over 22 years of experience in academia and industry. T.S. Mohan holds Master's and PhD degrees in computer science from the Indian Institute of Science, Bangalore, where he worked for about a decade before moving into industry. He was a young visiting scientist in the Lab for Computer Science, MIT, in 1988 and a visiting scientist at NEC Research Institute, Princeton, in the summer of 1994. He pursued his entrepreneurial interests in Bangalore in advanced computing technologies for about 6 years before joining Infosys. He is the Co-Chair of the Software Engineering in Practice Track of the International Conference on Software Engineering (ICSE) 2011 Conference, as well as Co-Chair of the International Workshop on Software Engineering for Cloud Computing, 2011 and the International Workshop on Future of Software Engineering in/for Cloud Computing 2011 (FoSEC 2011).
Wednesday, March 16, 2011 from 10:00-11:00am in EV2.260
With the advent of digital pathology, imaging scientists have begun to develop computerized image analysis algorithms for making diagnostic (disease presence), prognostic (outcome prediction), and theragnostic (choice of therapy) predictions from high resolution images of digitized histopathology. One of the caveats to developing image analysis algorithms for digitized histopathology is the ability to deal with highly dense, information-rich datasets; datasets that would overwhelm most computer vision and image processing algorithms. Over the last decade, manifold learning and nonlinear dimensionality reduction schemes have emerged as popular and powerful machine learning tools for pattern recognition problems. However, these techniques have thus far been applied primarily to classification and analysis of computer vision problems (e.g., face detection). In this talk, we discuss recent work by our group in the application of manifold learning methods to problems in computer-aided diagnosis, prognosis, and theragnosis of digitized histopathology. In addition, we discuss some exciting recent developments in the application of these methods for multi-modal data fusion and classification; specifically the building of meta-classifiers by fusion of histological image and "omics" signatures for prostate and breast cancer outcome prediction.
Dr. Anant Madabhushi is the Director of the Laboratory for Computational Imaging and Bioinformatics (LCIB), Department of Biomedical Engineering, Rutgers University. Dr. Madabhushi received his Bachelor's degree in Biomedical Engineering from Mumbai University, India, in 1998 and his Master's in Biomedical Engineering from the University of Texas, Austin, in 2000. In 2004 he obtained his PhD in Bioengineering from the University of Pennsylvania. He joined the Department of Biomedical Engineering, Rutgers University as an Assistant Professor in 2005. He was promoted to Associate Professor with Tenure in 2010. He is also a member of the Cancer Institute of New Jersey and an Adjunct Assistant Professor of Radiology at the Robert Wood Johnson Medical Center, NJ. Dr. Madabhushi has authored over 110 peer-reviewed publications in leading international journals and conferences. He has one patent, 9 pending, and 5 provisional patents in the areas of medical image analysis, computer-aided diagnosis, and computer vision. He is an Associate Editor for IEEE Transactions on Biomedical Engineering, IEEE Transactions on Biomedical Engineering Letters, BMC Cancer, and Medical Physics. He is also on the Editorial Board of the journal Analytical and Cellular Pathology. He has been the recipient of a number of awards for both research and teaching, including the Busch Biomedical Award (2006), the Technology Commercialization Award (2006), the Coulter Phase 1 and Phase 2 Early Career awards (2006, 2008), the Excellence in Teaching Award (2007-2009), the Cancer Institute of New Jersey New Investigator Award (2007, 2009), the Society for Imaging Informatics in Medicine (SIIM) New Investigator award (2008), and the Life Sciences Commercialization Award (2008). He is also a Wallace H. Coulter Fellow and a Senior IEEE member.
His research work has received grant funding from the National Cancer Institute (NIH), New Jersey Commission on Cancer Research, the Society for Imaging Informatics, the Department of Defense, and from Industry.
Monday, February 21, 2011 at 13:00 in EV002.260
Debugging semantic errors remains one of the most time-consuming, and sometimes frustrating, efforts in developing and maintaining programs. A semantic error is uncovered, and the programmer then begins multiple iterations within a debugger in order to build up a hypothesis about the original program fault that caused the error. Examples of semantic errors include segmentation fault, assertion failure, infinite loop, deadlock, livelock, and missing synchronization locks.
This talk describes a debugging approach based on a reversible debugger, sometimes known as a time-traveling debugger. This is a more natural approach, since it allows a programmer during a single program run to work backwards from semantic error to earlier fault, and still earlier to the original causal fault. A new tool, reverse expression watchpoints, allows one to begin with a program error and an expression that has an incorrect value, and automatically bring the programmer backwards in time to a point at which the expression first took on an incorrect value. This tool is part of a long-term project in which a series of such tools is planned, each tool customized for a different class of semantic errors.
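The search behind a reverse expression watchpoint can be sketched abstractly: given a timeline of checkpoints and a predicate that holds while the watched expression is still correct, binary-search for the first checkpoint where it became incorrect. The sketch below is an idealized model (states as a plain list, a single correct-to-incorrect flip), not the actual DMTCP-based implementation:

```python
# Idealized sketch of a reverse expression watchpoint search.
# Assumes expr_ok(state) is True for a prefix of the timeline and
# False afterwards (a single flip), so binary search applies.

def first_bad_checkpoint(states, expr_ok):
    """Index of the first state where expr_ok(state) is False."""
    lo, hi = 0, len(states) - 1
    if expr_ok(states[hi]):
        raise ValueError("expression is still correct at the last checkpoint")
    while lo < hi:                    # invariant: states[hi] is already bad
        mid = (lo + hi) // 2
        if expr_ok(states[mid]):
            lo = mid + 1              # still correct here: fault lies later
        else:
            hi = mid                  # already incorrect: fault here or earlier
    return lo

# Toy timeline: a value that should stay non-negative first goes bad at step 6.
timeline = [0, 1, 2, 3, 4, 5, -1, -2, -3]
print(first_bad_checkpoint(timeline, lambda x: x >= 0))  # -> 6
```

A real reversible debugger would re-execute from checkpoints rather than inspect stored states, but the logarithmic narrowing-in on the first incorrect value is the same idea.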
The long-term goals described here are motivated by an analogy between syntax errors and semantic errors:
* Currently, syntax errors are easily diagnosed by compilers that bring the programmer directly to the line number, within a textual program, that led to the bad syntax.
* In the future, semantic errors will be easily diagnosed by a new class of reversible debugger tools that bring the programmer directly to the point in time, within a familiar debugging environment, that led to the later semantic error.
The reversible debugger is itself based on a fast, transparent checkpointing package for Linux: DMTCP (Distributed MultiThreaded CheckPointing). DMTCP can checkpoint such varied programs as Matlab, OpenMPI, MySQL, Python, Perl, GNU screen, Vim, Emacs, and most user-developed programs, regardless of the implementation language. No kernel modification or other root privilege is needed. Of particular interest for this talk is the ability of a customized version of DMTCP to checkpoint an entire gdb session. The reversible debugger also supports weak determinism for purposes of debugging multi-threaded programs. The current implementation has been demonstrated robust enough to run such large, real-world programs as MySQL and Firefox.
Gene Cooperman received his Ph.D. from Brown University in 1978. He spent two years as a post-doc, followed by six years at GTE Laboratories. He has been a professor at Northeastern University since 1986, and a full professor since 1992. His interests lie in high performance computation and symbolic algebra. He has developed Task-Oriented Parallel C (TOP-C/C++), a model for writing parallel software easily. More recently, he has worked with novel applications of transparent checkpointing, such as checkpointing symbolic debuggers and checkpointing individual graphics-based processes within a graphics desktop. His DMTCP checkpointing project provides a robust platform for this purpose, while not requiring modifications to the application or kernel/run-time library. His disk-based parallel computation project (joint with Daniel Kunkle) is based on the Roomy language extension, and translates traditional RAM-intensive computations into scalable computations based on parallel disks. Finally, he works on the semi-automatic source-level translation of single-threaded task-oriented programs into multi-threaded programs with a small footprint. This work is an important focus of his ongoing collaboration with CERN, and is motivated by the requirements of future many-core CPU chips. He leads the High Performance Computing Laboratory at Northeastern University, where he currently advises four PhD students. He has over 80 refereed publications.
November 26, 2010 at 1:00PM in EV2.260
Multi-core processors are now common, but musical and audio applications that take advantage of multiple cores are rare. The most popular music software programming environments are sequential in character and provide only a modicum of support for the efficiencies to be gained from parallelization. We provide a brief summary of existing facilities in the most popular languages and provide examples of parallel implementations of some key algorithms in computer music, such as partitioned convolution and non-negative matrix factorization (NMF). We follow with a brief description of the SEJITS approach to providing support between the productivity-layer languages used by musicians and related domain experts and efficient parallel implementations. We also consider the importance of I/O in computer architectures for music and audio applications. We lament the fact that current GPU architectures as delivered in desktop and laptop processors are not properly harnessed for low-latency real-time audio applications.
From his high school years onwards, David Wessel's musical activities were central to his life, and after his PhD in Psychology he committed himself to blending his science and technology skills with his musical interests. In 1976, at the invitation of Pierre Boulez, he moved to Paris to work as a researcher at the then nascent Institut de Recherche et Coordination Acoustique/Musique (IRCAM), where he remained until 1988. For his work at IRCAM he was recognized as Chevalier dans l'Ordre des Arts et des Lettres by the French Minister of Culture.
In 1988, he arrived at UC Berkeley as Professor of Music with the charge
of building the interdisciplinary Center for New Music and Audio
Technologies (CNMAT). He organized CNMAT as a laboratory wherein both
science and technology people interact on a daily basis with musicians.
Wessel insists on an instrumental conception – the computer as musical
instrument equipped with gesture sensing devices and sound diffusion systems.
Date and Location: October 13th, Room EV3.309, 1:00 PM
We start the tour with observations about the implementation of rotation in current three-dimensional graphics programming. Standard texts convey the impression that the mathematics is conventional and that the main problem is to find a compromise between performance, precision, and numerical stability. The actual situation is more interesting.
The foundations of modern graphics programming were laid in the mid-nineteenth century by mathematicians such as Hamilton, Cayley, and Gibbs. Presentations create the illusion of coherence and completeness, but closer inspection of the original work reveals gaps, oddities, and a curious link between fermions, bosons, and Balinese candle dancers.
Other nineteenth-century mathematicians, such as Grassmann, Rodrigues, Clifford, and Lie, produced consistent and elegant systems that, for various reasons, have not achieved the attention that they deserve in graphics and other fields. However, bits and pieces of these systems have been exploited by physicists for many years. Recently, there have been efforts to rebuild mechanics and physics on a single algebraic foundation. Algebraic techniques have also been introduced into graphics programming and may eventually come to dominate the field. We end the tour with a glimpse of a possible future for rotation in graphics programming.
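As a concrete point of reference for the rotation machinery surveyed above, here is a minimal sketch of quaternion rotation, the technique descending from Hamilton's work that dominates current 3D graphics practice. The function names are illustrative and the code is not drawn from the talk itself.

```python
import math

def quat_mul(p, q):
    # Hamilton product of quaternions given as (w, x, y, z) tuples
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def rotate(v, axis, angle):
    """Rotate 3-vector v about a unit axis by angle (radians) via q v q*."""
    half = angle / 2.0
    s = math.sin(half)
    q = (math.cos(half), axis[0]*s, axis[1]*s, axis[2]*s)
    q_conj = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = quat_mul(quat_mul(q, (0.0, *v)), q_conj)
    return (x, y, z)
```

Production code typically converts the quaternion to a 3x3 matrix for batches of vertices; the sandwich product above is the underlying definition.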
We study so-called betweennesses induced by graphs
as well as set systems. Algorithmic problems related
to betweennesses are typically hard. They have been
studied as relaxations of ordinal embeddings and occur
for instance in psychometrics and molecular biology.
Our contributions are hardness results, efficient
algorithms, and structural insights such as complete
This is joint work with V. Santos, P.M. Schaefer, and J.L. Szwarcfiter
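A minimal sketch of the betweenness induced by a graph, assuming the usual shortest-path (metric) definition: b lies between a and c when b is on some shortest a-c path. The talk may use a different variant; the names here are illustrative.

```python
from collections import deque

def bfs_dist(adj, s):
    # Unweighted shortest-path distances from s (assumes a connected graph)
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def graph_betweenness(adj):
    """All triples (a, b, c) of distinct vertices with b on a shortest a-c path."""
    d = {v: bfs_dist(adj, v) for v in adj}
    return {(a, b, c) for a in adj for b in adj for c in adj
            if len({a, b, c}) == 3 and d[a][b] + d[b][c] == d[a][c]}
```

Deciding properties of such induced ternary relations (rather than merely enumerating them, as above) is where the hardness results mentioned in the abstract arise.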
EV3.309, CSE Department, Concordia University
1515 Ste Catherine West, 3rd floor
Montreal, Quebec H3G 1M8
The original Mentor is a character in Homer's epic poem The Odyssey. When Odysseus,
King of Ithaca, went to fight in the Trojan War, he entrusted the care of his kingdom to
Mentor. Mentor served as the teacher and overseer of Odysseus's son, Telemachus.
In today's corporate nomenclature, mentorship refers to the relationship in which a more
experienced or more knowledgeable person helps a less experienced or less knowledgeable
person, often referred to as a protégé or mentee. However, there are many avenues to mentor or
be mentored. This talk will discuss the speaker's experiences with mentoring.
Speaker: Jennifer Ng, IEEE WIE Ottawa.
Jennifer obtained her Bachelor of Electrical Engineering from McGill University,
Montréal, Canada (B.Eng. '94) and recently moved back to Canada after a decade in the US.
She works in Regulatory Affairs for Medical Devices at Abbott Point of Care in Ottawa.
Jennifer has been a member of IEEE since 1990 and became a member of Women in
Engineering (WIE) in 1996. She has been involved in mentoring students (McGill Mentor
Program) as well as peer IEEE members (IEEE mentoring service) over the past several
years. For her full biography, go to http://www.jenniferng.org
For more information, please visit the IEEE WIE Montreal website at
DATE: Thursday, October 15th, 2009, TIME: 5:45 p.m., LOCATION: EV 3.309
Date: July 03, 2009 at 10:30 AM
Location: EV003.309, 1515 St Catherine Street West
Speaker: Prof. K. K. Biswas
Title: Recognizing individuals from the energy component of their walks
Image based human recognition methods such as fingerprints, palms,
face, ear, iris etc. require the subject to cooperate to provide the
relevant data. Recently, gait has emerged as a new biometric that is
non-obtrusive in nature and concerns recognition of individuals by the
way they walk. The spatial and temporal shape of motion of an individual
is usually the same for all gait cycles and is considered to be unique
to that individual. This talk will present schemes which make use of
gait energy image representation. This basically involves capturing the
human motion in a single image while preserving the temporal gait
characteristics of the individual. The image does get disturbed when the
subject is carrying a bag or wearing an overcoat. We shall illustrate
how these effects can be minimized by using the spatio-temporal motion
features through results on a large gait data set.
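The gait energy image described above, which captures motion in a single image while preserving temporal gait characteristics, reduces at its core to averaging aligned binary silhouettes over a gait cycle. A minimal sketch (not the speaker's code; input format is an assumption):

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average aligned binary silhouette frames over one gait cycle.
    silhouettes: sequence of HxW arrays with values in {0, 1}.
    Each output pixel is the fraction of the cycle it was foreground."""
    frames = np.asarray(silhouettes, dtype=float)
    return frames.mean(axis=0)
```

Static body parts (torso, head) appear bright in the result, while swinging limbs trace out gray regions, which is why a carried bag or overcoat perturbs the image as the abstract notes.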
Dr. K. K. Biswas has been a Professor in the computer science and engineering
department of IIT Delhi, India, since 1988. He has extensive research
experience in the areas of image processing and computer vision. His
primary areas of research include fuzzy logic for content-based image
retrieval, video segmentation and categorization, gait recognition
technology for biometrics, and soft-computing-based activity recognition
in video clips. He was visiting faculty at the University of Central Florida
from 2003 to 2007 and is a member of the editorial boards of
international journals in his research field.
Date: June 16, 2009 at 18:00
Location: EV003.309, 1515 St Catherine Street West
Speaker: Emil Vassev, University College Dublin
Title: Engineering Autonomic Systems with ASSL
Since its introduction in 2001 by IBM, autonomic computing has inspired many initiatives for self-management of complex systems. The Autonomic System Specification Language (ASSL) is an initiative that provides a framework for the specification, validation, and code generation of autonomic systems. A formal method dedicated to autonomic computing, ASSL helps researchers with problem formation and system design, analysis, evaluation, and implementation. The ASSL formal notation is a hierarchical specification model defined through formalization tiers. The framework provides a toolset that developers can use to edit and validate ASSL specifications and generate Java code. The current validation approach is a form of consistency checking performed against a set of semantic definitions. Currently, different verification mechanisms for automatic reasoning are under development, such as model checking support for both specification and post-implementation phases of the software lifecycle. ASSL has been successfully used to make existing and prospective complex systems autonomic. Here, autonomic properties have been specified and prototype models have been generated for two NASA projects: the Autonomous Nano-Technology Swarm concept mission and the Voyager mission.
Date: June 4, 2009 at 10:00 AM
Location: EV003.309, 1515 St Catherine Street West
Speaker: Dr. Chi Hau Chen, University of Massachusetts Dartmouth
Title: Signal Processing in Pattern Recognition
While progress in pattern recognition and in signal processing has advanced largely in parallel over the past 50 years, the convergence of the two fields has been quite evident, especially in the use of signal (image) processing and modeling for preprocessing and feature extraction in pattern recognition. A good example is the transform methods in signal (image) processing, which are used extensively in pattern recognition. In this talk we will examine signal processing in pattern recognition applications with seismic, sonar, and ultrasonic testing signals as well as remote sensing images. Special focus is placed on statistical pattern recognition issues in remote sensing. While pattern recognition applications are diverse, signal processing has provided a common step toward building more effective pattern recognition systems.
Chi Hau Chen received his Ph.D. in electrical engineering from Purdue University in 1965. He has been a faculty member at the University of Massachusetts Dartmouth (UMass Dartmouth) since 1968, where he is now Chancellor Professor. He was the director of the NATO Advanced Study Institute on Pattern Recognition and Signal Processing held at ENST, Paris, in 1978. Dr. Chen was Associate Editor of IEEE Trans. on Acoustics, Speech and Signal Processing from 1982 to 1986 and Associate Editor on information processing for remote sensing of IEEE Trans. on Geoscience and Remote Sensing from 1985 to 2000. He was elected IEEE Fellow in 1988, Life Fellow in 2003, and Fellow of the International Association for Pattern Recognition (IAPR) in 1996. He has been an Associate Editor of the International Journal of Pattern Recognition and Artificial Intelligence since 1985 and on the Editorial Board of the Pattern Recognition Journal since 2009.
In addition to the remote sensing and geophysical applications of statistical pattern recognition, he has been active in the signal and image processing of medical ultrasound images as well as industrial ultrasonic data for nondestructive evaluation of materials. He has published 25 books in his areas of research interest.
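As one illustration of the transform methods the abstract alludes to, here is a hand-rolled type-II DCT used to extract a compact feature vector from a signal. The talk does not single out this particular transform, so treat it as a representative example with illustrative names.

```python
import math

def dct2(x):
    """Type-II DCT of a real sequence: a standard transform that compacts
    signal energy into a few low-order coefficients."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def features(signal, num_coeffs=4):
    # Keep the first few coefficients as a compact feature vector
    return dct2(signal)[:num_coeffs]
```

Feeding such low-order transform coefficients, rather than raw samples, to a classifier is the preprocessing/feature-extraction role the abstract describes.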
Date: April 15, 2009 at 12:00
Location: EV002.184, 1515 St Catherine West, Montreal
Speaker: U.S.R. Murty
Title: The Perfect Matching Polytope and Solid Bricks
The perfect matching polytope of a graph G, denoted here by Poly(G), is the convex hull of the set of incidence vectors of perfect matchings of G. Edmonds (1965) showed that a vector x in R^E belongs to the perfect matching polytope of G if and only if it satisfies the inequalities: (i) x \geq 0 (non-negativity), (ii) x(\partial(v)) = 1 for all v in V (degree constraints), and (iii) x(\partial(S)) \geq 1 for all odd subsets S of V (odd set constraints). We are interested in the problem of characterizing graphs whose perfect matching polytopes are determined by the non-negativity and degree constraints. It is well known that bipartite graphs have this property. A graph is an Edmonds graph if the description of Poly(G) requires at least one odd set constraint. The Edmonds Graph Recognition Problem (EGP) is the problem of recognizing whether a given graph is an Edmonds graph. By Edmonds' Theorem, EGP is in NP. We showed that for planar graphs EGP is in P. But, in general, we do not even know if EGP is in co-NP. In this talk I shall present a characterization of Edmonds graphs. A class of graphs known as solid bricks arises as important examples of non-Edmonds graphs.
Based on joint work with M. H. de Carvalho and C. L. Lucchesi.
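Edmonds' three constraint families can be checked directly on small examples. A brute-force sketch (exponential in |V|, for illustration only; the edge vector x is assumed to be a dict keyed by frozenset vertex pairs):

```python
from itertools import combinations

def satisfies_edmonds(vertices, x, tol=1e-9):
    """Check non-negativity, degree, and odd-set constraints for a
    fractional edge vector x: dict mapping frozenset({u, v}) to a value."""
    edges = list(x)
    # (i) non-negativity: x >= 0
    if any(val < -tol for val in x.values()):
        return False
    # (ii) degree constraints: x(partial(v)) = 1 for every vertex v
    for v in vertices:
        if abs(sum(x[e] for e in edges if v in e) - 1.0) > tol:
            return False
    # (iii) odd-set constraints: x(partial(S)) >= 1 for odd S, |S| >= 3
    for k in range(3, len(vertices) + 1, 2):
        for S in combinations(vertices, k):
            S = set(S)
            cut = sum(x[e] for e in edges if len(e & S) == 1)
            if cut < 1.0 - tol:
                return False
    return True
```

The triangle with 1/2 on every edge is the classic witness for why the odd-set constraints matter: it satisfies (i) and (ii) but violates (iii), so it lies outside Poly(K3).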
Date: April 23, 2009 at 14:30
Location: EV002.184, 1515 St Catherine West, Montreal
Speaker: T.C. Nicholas Graham
Title: Supporting Adaptive Mobile Collaboration
Recent years have seen a proliferation of exciting new mobile devices, such as smartphones, netbooks, and ultra-light laptops. These provide ever more ways for people to communicate and collaborate on the go. Programming collaborative applications over mobile devices is challenging, as such applications must be high-performance, robust in the presence of failure (such as batteries dying or losing network connection), and easy to use in a mobile environment.
In this talk, I will present Fiia, a middleware toolkit aiding the development of collaborative applications in a mobile setting. Fiia's approach is model-based, allowing developers to manipulate a high-level conceptual model of their system, while a runtime refinery automatically resolves issues of distribution and partial failure.
Fiia has been used to develop systems as diverse as a collaborative game prototyping environment, a smartphone-based presentation tool, and a tabletop-based furniture sales system.
Speaker: Dr. Vasek Chvatal
Date: Monday, March 2, 2009
Location: EV 3.309, 1515 St Catherine St, Montreal
A point in the plane is said to lie between points A and C if it is an interior point of the line segment joining A and C. In his development of geometry, Euclid neglected to give the notion of betweenness the same axiomatic treatment that he gave, for instance, to the notion of equality. This omission was rectified twenty-two centuries later by Moritz Pasch: http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Pasch.html
During the twentieth century, geometric betweenness was generalized in diverse branches of mathematics to ternary relations of metric betweenness, lattice betweenness, and algebraic betweenness. I will talk about three settings where such abstract betweennesses show up.
The first of these settings is ordered geometry; there, primitive notions of points and lines are linked by the relation of incidence and by axioms of betweenness; two classic theorems here are the Sylvester-Gallai theorem http://mathworld.wolfram.com/SylvestersLineProblem.html
and the de Bruijn-Erdos theorem. I conjectured in 1998 http://users.encs.concordia.ca/~chvatal/newsg.pdf and Xiaomin Chen proved in 2003 http://dimacs.rutgers.edu/TechnicalReports/abstracts/2003/2003-32.html that the Sylvester-Gallai theorem generalizes to metric spaces when lines in these spaces are defined right; together, we conjectured http://arxiv.org/abs/math.CO/0610036 that the de Bruijn-Erdos theorem also generalizes to metric spaces when lines in these spaces are defined right (with "right" having a different sense in each of the two instances); the two of us and Ehsan Chiniforooshan have partial results on this conjecture.
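The metric-space generalizations above rest on metric betweenness and on a notion of "line". A common choice in this line of work, which I sketch here as an assumption since the abstract does not spell it out, is that the line induced by points a and b consists of every point c such that one of a, b, c lies metrically between the other two:

```python
def between(d, a, b, c, tol=1e-9):
    # b lies metrically between a and c: d(a,b) + d(b,c) = d(a,c)
    return abs(d(a, b) + d(b, c) - d(a, c)) <= tol

def line(points, d, a, b):
    """Line induced by a and b: all points collinear with them
    under metric betweenness (one of the three is between the others)."""
    return {c for c in points
            if between(d, a, c, b) or between(d, c, a, b) or between(d, a, b, c)}
```

With the Euclidean metric this recovers ordinary lines restricted to the point set; the conjectures cited above ask which Sylvester-Gallai and de Bruijn-Erdos style statements survive for lines so defined in arbitrary finite metric spaces.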
The second of the three settings is abstract convexity; there, families of sets called "convex" obey certain axioms. Such finite structures are called convex geometries when they have the Minkowski-Krein-Milman property: every set is the convex hull of its extreme points. Two classical examples of convex geometries come from shelling of partially ordered sets and simplicial shelling of triangulated graphs. Last June I characterized, by a five-point condition, a class of betweennesses generating a class of convex geometries that subsumes the two examples. http://users.encs.concordia.ca/~chvatal/abc.pdf Laurent Beaudou, Ehsan Chiniforooshan, and I have additional results on such betweennesses.
The last setting lies between physics and philosophy: in his effort to develop a causal theory of time, Hans Reichenbach http://en.wikipedia.org/wiki/Hans_Reichenbach introduced the notion of causal betweenness, which is a ternary relation defined on events in probability spaces. This January, Baoyindureng Wu and I characterized, by easily verifiable properties, abstract ternary relations isomorphic to Reichenbach's causal betweenness.
There is also a nice connection with a 1979 theorem of Jarda Opatrny.
The joint work with Laurent Beaudou, Ehsan Chiniforooshan, and Baoyindureng Wu was done in our research group ConCoCO http://users.encs.concordia.ca/~concoco/
(Concordia Computational Combinatorial Optimization).
Vasek Chvatal got his PhD in mathematics from the University of Waterloo in 1970. Before joining Concordia in June 2004 as its first Tier 1 Canada Research Chair, he taught mathematics, operations research, and computer science at McGill, Stanford, Université de Montréal, and Rutgers. Information about his research is available at http://users.encs.concordia.ca/~chvatal/