Hidden conditional random fields (HCRFs) are discriminative models that learn the joint distribution of a class label and a sequence of latent variables conditioned on a given observation sequence, with dependencies among the latent variables expressed by an undirected graph. HCRFs learn not only the hidden states that discriminate one class label from all the others, but also the structure that is shared among labels. A limitation of HCRFs, however, is that finding the optimal number of hidden states for a given classification problem is not always intuitive; learning the correct number of states is often a trial-and-error process involving cross-validation, which can be computationally very expensive. This limitation motivated our nonparametric HCRF model, which automatically learns the optimal number of hidden states for a given dataset.

Over the past decade, nonparametric methods have been successfully applied to many existing graphical models, allowing them to grow the number of latent states as necessary to fit the data. A prominent and well-studied example is the infinite hidden Markov model (iHMM or HDP-HMM), a hierarchical Dirichlet process (HDP)-driven HMM with an infinite number of potential hidden states. The main contribution of this brief is the use of HDPs to allow an infinite number of hidden states in the iHCRF. Since exact inference for an infinite model is intractable, we propose an approximation method that learns the model's hyperparameters based on beam sampling, a Markov-chain Monte Carlo (MCMC) technique that has been used effectively to sample whole trajectories for the iHMM. Beam sampling achieves forward filtering and backward sampling for a chain of latent variables by introducing an auxiliary variable u_t for each latent variable s_t.
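To make the auxiliary-variable construction concrete, the following is a minimal sketch of one beam-sampling sweep on a finite truncation of the state space, in the spirit of the iHMM sampler described above. The function name, the truncation level K, and the representation of transitions and emission likelihoods as dense arrays are illustrative assumptions, not the implementation used in this brief: each u_t is a uniform slice under the probability of the transition currently taken, forward filtering then only propagates through transitions whose probability exceeds the slice (finitely many, even in the infinite model), and a new trajectory is drawn by backward sampling.

```python
import numpy as np

def beam_sample_trajectory(rng, states, pi0, trans, emit_lik):
    """One beam-sampling sweep over a latent chain (illustrative sketch
    on a finite truncation with K states).

    states:   current trajectory s_1..s_T, shape (T,)
    pi0:      initial-state distribution, shape (K,)
    trans:    transition matrix, shape (K, K)
    emit_lik: emission likelihoods p(y_t | s_t = k), shape (T, K)
    """
    T, K = emit_lik.shape

    # 1. Auxiliary slice variables: u_t ~ Uniform(0, p(s_t | s_{t-1}))
    #    under the transition taken in the *current* trajectory.
    u = np.empty(T)
    u[0] = rng.uniform(0.0, pi0[states[0]])
    for t in range(1, T):
        u[t] = rng.uniform(0.0, trans[states[t - 1], states[t]])

    # 2. Forward filtering, restricted to transitions whose probability
    #    exceeds the slice; the current trajectory always survives, so
    #    the filtered distributions are never all-zero.
    alpha = np.zeros((T, K))
    alpha[0] = emit_lik[0] * (pi0 > u[0])
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        reachable = trans > u[t]            # K x K boolean mask
        alpha[t] = emit_lik[t] * (alpha[t - 1] @ reachable)
        alpha[t] /= alpha[t].sum()

    # 3. Backward sampling of a new trajectory consistent with the slices.
    new_states = np.empty(T, dtype=int)
    new_states[-1] = rng.choice(K, p=alpha[-1])
    for t in range(T - 2, -1, -1):
        w = alpha[t] * (trans[:, new_states[t + 1]] > u[t + 1])
        new_states[t] = rng.choice(K, p=w / w.sum())
    return new_states


# Illustrative usage on a toy 3-state chain.
rng = np.random.default_rng(0)
pi0 = np.array([0.5, 0.3, 0.2])
trans = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.3, 0.3, 0.4]])
emit_lik = rng.uniform(0.1, 1.0, size=(5, 3))
states = np.zeros(5, dtype=int)
new_states = beam_sample_trajectory(rng, states, pi0, trans, emit_lik)
```

In the infinite model the same slices do the real work: because only transitions with probability above u_t are considered, the filtering step touches a finite subset of the unbounded state space, which is what makes exact-in-the-limit MCMC over the iHMM (and, in our case, the iHCRF's hidden states) tractable.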