Structured prediction has been successfully applied to problems with strong interdependencies among output variables. In natural language processing (NLP), many tasks are formulated as structured prediction problems. Exact inference methods exist for sequences and trees; for tasks with general output structures, e.g., a pairwise fully connected undirected graph, exact inference is intractable, and approximate inference is usually pursued to obtain an approximate solution. The major advantage of structured prediction models such as conditional random fields (CRFs) and structural support vector machines (structural SVMs) is that they can easily integrate prior knowledge of a specific domain through feature engineering. The discriminative formulation of CRFs can account for overlapping features (e.g., first-order or even higher-order linear-chain features) over the whole observation sequence. Structural SVMs, on the other hand, rely on joint feature maps over input–output pairs, where features can be represented equivalently to those of CRFs.

Over the last decade, much effort in structured prediction has been devoted to modeling the interdependencies among output variables, but less consideration has been given to feature engineering, which is a nontrivial and tedious task for general users. Features generated from arbitrary templates may be redundant or uninformative. Structured prediction models such as CRFs and structural SVMs treat the features generated from each template equally, without exploiting the importance of each template or its generated features. Improper templates may even generate conflicting or noisy features that degrade these models.
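To make the notion of template-generated features concrete, the following is a minimal sketch of how feature templates expand into per-position features for a linear-chain CRF over a token sequence. The template scheme (`(name, offset)` pairs over word identities) and all function names are illustrative assumptions, not the paper's method; real toolkits use richer templates (prefixes, suffixes, capitalization, tag bigrams), and every template adds a block of features whose usefulness varies, which is exactly the redundancy issue discussed above.

```python
def apply_templates(tokens, templates):
    """Expand simple feature templates at each position of a token sequence.

    Each template is a (name, offset) pair: at position i, the template
    fires a string feature on the token at position i + offset.
    Positions outside the sequence fall back to a padding symbol.
    This is an illustrative sketch, not a specific toolkit's format.
    """
    features = []
    for i in range(len(tokens)):
        feats = []
        for name, offset in templates:
            j = i + offset
            tok = tokens[j] if 0 <= j < len(tokens) else "<PAD>"
            feats.append(f"{name}={tok}")
        features.append(feats)
    return features


# A tiny window of word-identity templates around the current position.
templates = [("w[-1]", -1), ("w[0]", 0), ("w[+1]", 1)]
print(apply_templates(["the", "cat", "sat"], templates))
# First position yields ['w[-1]=<PAD>', 'w[0]=the', 'w[+1]=cat'], etc.
```

Each template contributes one feature per position here; a standard CRF or structural SVM learns one weight per distinct feature string and, as noted above, does not distinguish which template a feature came from.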
Structured prediction using generalized p-block norm regularization