Multi-view Learning with Perceptron for Dog Tail Displacement Identification

Authors

  • Alejandra Ramos Porras

Keywords:

Multi-view learning, data fusion, Perceptron, canine communication, tail displacement

Abstract

This paper presents a novel approach to enhancing human-canine communication through data fusion techniques that analyze tail displacement patterns in dogs. By integrating data from multiple viewpoints (specifically, the tail tip, hip, and neck), this study aims to improve the automatic interpretation of canine signals. Tail displacements to the right are generally associated with positive emotions, while leftward displacements suggest negative emotions. A Perceptron model was trained on the fused data and compared with a previous Perceptron model that used only tail-tip data. Performance metrics, including accuracy, precision, recall, and F1-score, were computed, and a statistical test was performed to determine which Perceptron model better identifies these displacements.
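The comparison described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the features, their correlations, and the synthetic labels are assumptions, standing in for the real tail-tip, hip, and neck measurements, and scikit-learn's Perceptron is used in place of whatever implementation the study employed.

```python
# Hypothetical sketch: Perceptron on fused multi-view features (tail tip,
# hip, neck) vs. tail-tip only, evaluated with accuracy and F1-score.
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# Assumed features: horizontal displacement of each keypoint.
# Label 1 = rightward displacement (positive emotion), 0 = leftward.
tail_tip = rng.normal(0.0, 1.0, (n, 1))
hip = 0.5 * tail_tip + rng.normal(0.0, 1.0, (n, 1))
neck = 0.3 * tail_tip + rng.normal(0.0, 1.0, (n, 1))
y = (tail_tip[:, 0] + 0.4 * hip[:, 0] > 0).astype(int)

fused = np.hstack([tail_tip, hip, neck])  # simple feature-level fusion

for name, X in [("tail-tip only", tail_tip), ("fused views", fused)]:
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0
    )
    clf = Perceptron(random_state=0).fit(X_tr, y_tr)
    y_hat = clf.predict(X_te)
    print(
        f"{name}: acc={accuracy_score(y_te, y_hat):.3f} "
        f"prec={precision_score(y_te, y_hat):.3f} "
        f"rec={recall_score(y_te, y_hat):.3f} "
        f"f1={f1_score(y_te, y_hat):.3f}"
    )
```

In the actual study a statistical test would then compare the two models' scores across repeated runs or cross-validation folds; here only a single train/test split is shown for brevity.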

Published

2026-04-20