Publication:
Reconocimiento de acciones cotidianas
dc.contributor.author | Vizconde La Motta, Kelly | es_PE |
dc.date.accessioned | 2024-05-30T23:13:38Z | |
dc.date.available | 2024-05-30T23:13:38Z | |
dc.date.issued | 2016 | |
dc.description.abstract | The proposed method consists of three parts: feature extraction, the use of a bag-of-words model, and classification. In the first stage, we use the STIP descriptor for the intensity channel, the HOG descriptor for the depth channel, and MFCC and spectrogram features for the audio channel. In the next stage, the bag-of-words approach is applied to each type of information separately, using the K-means algorithm to generate the dictionary. Finally, an SVM classifier labels the visual-word histograms. For the experiments, we manually segmented the videos into clips containing a single action, achieving a recognition rate of 94.4% on the Kitchen-UCSP dataset (our own dataset) and a recognition rate of 88% on HMA videos. | |
dc.description.sponsorship | Consejo Nacional de Ciencia, Tecnología e Innovación Tecnológica - Concytec | |
dc.identifier.uri | https://hdl.handle.net/20.500.12390/2060 | |
dc.language.iso | spa | |
dc.publisher | Universidad Católica San Pablo | |
dc.rights | info:eu-repo/semantics/openAccess | |
dc.rights.uri | http://creativecommons.org/licenses/by-nc/4.0/ | |
dc.subject | SVM | |
dc.subject | STIP | es_PE |
dc.subject | HOG | es_PE |
dc.subject | Spectrogram | es_PE |
dc.subject.ocde | https://purl.org/pe-repo/ocde/ford#1.02.01 | |
dc.title | Reconocimiento de acciones cotidianas | |
dc.type | info:eu-repo/semantics/masterThesis | |
dspace.entity.type | Publication | |
oairecerif.author.affiliation | #PLACEHOLDER_PARENT_METADATA_VALUE# |
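As a rough illustration of the pipeline summarized in the abstract above (local descriptors per channel, a K-means visual dictionary, bag-of-words histograms, and an SVM classifier), here is a minimal Python sketch. It assumes the STIP/HOG/MFCC/spectrogram descriptors have already been extracted for each manually segmented clip and are passed in as arrays; the function names, the vocabulary size of 500, and the SVM parameters are illustrative assumptions, not values taken from the thesis.

```python
# Minimal sketch of a bag-of-words + SVM action-recognition pipeline.
# Assumption: per-clip local descriptors (STIP / HOG / MFCC / spectrogram)
# are already extracted; only dictionary building, quantization, and
# classification are shown here.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC


def build_vocabulary(descriptor_list, k=500, seed=0):
    """Cluster all training descriptors into k visual words (the dictionary)."""
    all_desc = np.vstack(descriptor_list)
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(all_desc)


def bow_histogram(descriptors, kmeans):
    """Quantize one clip's descriptors and build a normalized word histogram."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-8)


def train_action_classifier(train_descriptors, train_labels, k=500):
    """train_descriptors: list of (n_i, d) arrays, one per segmented clip."""
    kmeans = build_vocabulary(train_descriptors, k=k)
    X = np.array([bow_histogram(d, kmeans) for d in train_descriptors])
    clf = SVC(kernel="rbf", C=10.0).fit(X, train_labels)  # illustrative parameters
    return kmeans, clf


def predict_action(clip_descriptors, kmeans, clf):
    """Label a new clip from its bag-of-words histogram."""
    return clf.predict(bow_histogram(clip_descriptors, kmeans)[None, :])[0]
```

In the method described above, this quantization would be run separately for each information channel, so each channel gets its own dictionary and histogram before classification.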