Tools and evaluation methods for discussion and presentation skills training

Abstract

Our university is currently developing an advanced physical-digital learning environment that trains students to enhance their discussion and presentation skills. The environment facilitates efficient discussion among users with state-of-the-art technologies such as touch panel discussion tables, digital posters, and an interactive wall-sized whiteboard. It includes a discussion mining system that efficiently records, summarizes, and annotates discussions held inside our facility. We also developed a digital poster authoring tool, a novel tool for creating interactive digital posters displayed using our digital poster presentation system. Evaluation results show the effectiveness of two of our facilities: the discussion mining system and the digital poster authoring tool. In addition, our physical-digital learning environment will be further enhanced with a vision system that will detect interactions with the digital poster presentation system and the different discussion tools, enabling more automated skill evaluation and discussion mining.

Introduction

Recently, much attention has been paid to evidence-based research, such as life-logging Sellen and Whittaker (2010) and big data applications Armstrong (2014), which proposes techniques for raising the quality of human life by storing and analyzing large quantities of daily activity data. Such techniques have been applied in the education sector, but no key method has been established yet, because it is generally hard to record intellectual activities, to accumulate and analyze the data on a large scale, and to compare them with a person’s physical activities, position, movement information, and the like. Although some recent studies record intellectual activities automatically and in more detail, their techniques are not sufficient for the automated evaluation of a person’s intellectual activities. This study therefore aims to develop a new environment that strengthens students’ skills both in real time and offline, based on analyses of abundant presentation and discussion data.

Our study focuses on the new graduate leading program of Nagoya University, which aims to cultivate future industrial science leaders. This program has a new physical-digital environment for facilitating presentations and discussions among its selected students. In particular, the students’ presentations and discussions are recorded in detail, and the mechanism of knowledge emergence is analyzed using a discussion mining system. Furthermore, we have evaluated the performance of some students with respect to their skill in creating digital posters using our recently developed tool.

Related work and motivation

This section has two parts: discussion evaluation and presentation evaluation. Each part covers two types of system: fully-automatic and semi-automatic. A fully-automatic system calculates discussion or presentation quality scores in an automated fashion, while a semi-automatic system supports people in evaluating the discussion or presentation with evidential data.

Discussion evaluation

Fully-automatic

Given the abundance of posts in online discussions, finding good-quality posts is difficult. An automatic rating of postings in online discussion forums was presented based on a set of metrics Wanas et al. (2008). This set of metrics, used to assess the value of a post, includes relevance, originality, forum-specific features, surface features, and posting-component features. With these metrics used to train a non-linear support vector machine classifier, posts were categorized into their corresponding levels (High, Medium, or Low).
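To make the approach concrete, the following minimal sketch trains a non-linear SVM on metric vectors and predicts a quality level. The feature values and their dimensionality are invented for illustration; they do not reproduce the actual metrics of Wanas et al. (2008).

```python
# A minimal sketch of metric-based post rating in the spirit of Wanas et al. (2008).
# The feature vectors below are hypothetical stand-ins for the five metric groups.
from sklearn.svm import SVC

# Each row: [relevance, originality, forum_specific, surface, posting_component]
train_features = [
    [0.9, 0.8, 0.7, 0.9, 0.8],   # a high-quality post
    [0.5, 0.4, 0.5, 0.6, 0.5],   # a medium-quality post
    [0.1, 0.2, 0.3, 0.2, 0.1],   # a low-quality post
]
train_labels = ["High", "Medium", "Low"]

# Non-linear SVM classifier (RBF kernel), as in the original study.
classifier = SVC(kernel="rbf", gamma="scale")
classifier.fit(train_features, train_labels)

new_post = [[0.8, 0.7, 0.6, 0.8, 0.7]]
print(classifier.predict(new_post))  # e.g. ['High']
```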

Another system, called Auto-Assessor, used natural language processing tools to assess students’ responses to short-answer questions Cutrone et al. (2011). The system utilized a component-based architecture with a text pre-processing phase and a word/synonym matching phase to automate the marking process. In their system evaluation, the authors compared the assessment results of Auto-Assessor with those of human graders to verify the possibility of applying the proposed system in practical situations.
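The word/synonym matching phase can be pictured with the following sketch, which assumes WordNet (via NLTK) as the synonym source; the exact matching procedure and pre-processing of Auto-Assessor may differ.

```python
# Hedged sketch of word/synonym matching for short-answer marking, in the
# spirit of Auto-Assessor (Cutrone et al. 2011); not the system's actual code.
from nltk.corpus import wordnet  # requires: nltk.download('wordnet')

def synonyms(word):
    """Collect lowercase WordNet lemma names across all senses of a word."""
    return {lemma.name().lower()
            for syn in wordnet.synsets(word) for lemma in syn.lemmas()}

def match_score(student_answer, model_answer):
    """Fraction of model-answer words matched directly or via a synonym."""
    student_words = set(student_answer.lower().split())
    model_words = model_answer.lower().split()
    hits = sum(1 for w in model_words
               if w in student_words or synonyms(w) & student_words)
    return hits / len(model_words)

# 'cpu' matches 'processor' through a shared WordNet synset.
print(match_score("the cpu runs programs", "the processor executes programs"))
```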

However, these fully-automatic systems still have some drawbacks. Some methods are language independent, which results in poor performance on relevance and originality Wanas et al. (2008); thus, additional techniques should be employed in their assessment of discussions. Also, even with additional NLP techniques, the weights given to words are not varied Cutrone et al. (2011), hindering the system from identifying words that are more significant than others.

Semi-automatic

Aside from fully-automatic systems, some studies employed a semi-automatic approach. One such study implemented a group discussion evaluation method and a discussion evaluation support system focused on ex post evaluation Omori et al. (2006). The system provided a Web-based interface displaying the evaluation items and criteria so that users could easily assign a score to each discussion remark based on its clearness, its proposal of issues, and its logicality. Results confirmed the effectiveness of both the evaluation method and the support system.

The above-mentioned systems, however, do not address one problem in discussions: the difficulty of getting students to actively participate. Thus, a gamification framework was integrated into a discussion support system for enhancing and sustaining motivation in student discussions Ohira et al. (2014). Besides sustaining student motivation, the system also evaluates and visualizes improvement in the students’ capacity to discuss, and it supports users in evaluating the quality of each discussion statement.

However, for these two semi-automatic systems, more experiments are needed to determine the effect of teachers’ feedback on the students Omori et al. (2006) and to verify performance in real-world settings.

Presentation evaluation

Fully-automatic

A presentation training system called Presentation Sensei was implemented to observe a presentation rehearsal and give feedback to the speaker Kurihara et al. (2007). The system is equipped with a microphone and a camera and analyzes the presentation by combining speech and image processing techniques. Based on the analysis results, the system provides the speaker with recommendations for improving presentation delivery, such as pacing and audience engagement. During the presentation, the system can alert the speaker when any of its indices (speaking rate, eye contact with the audience, and timing) exceeds a predefined warning threshold. After the presentation, the system generates summaries of the analysis results for the user’s self-examination. Although this system focuses on self-training, it still needs to be tested in a real presentation environment.
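The alerting behavior can be pictured as a simple threshold check over the monitored indices. The following sketch uses invented index names and threshold values; the original system’s actual parameters are not reproduced here.

```python
# Illustrative threshold-based alerting in the style of Presentation Sensei
# (Kurihara et al. 2007). Thresholds and index names are assumptions.
WARNING_THRESHOLDS = {
    "speaking_rate_wpm": 180.0,            # words per minute considered too fast
    "seconds_without_eye_contact": 10.0,   # looking away from the audience
    "seconds_over_planned_time": 60.0,     # running over the allotted time
}

def check_indices(measurements):
    """Return a warning message for every index exceeding its threshold."""
    return [f"Warning: {index} = {value} exceeds {WARNING_THRESHOLDS[index]}"
            for index, value in measurements.items()
            if value > WARNING_THRESHOLDS[index]]

# Example measurements a speech/image processing pipeline might report:
for warning in check_indices({"speaking_rate_wpm": 195.0,
                              "seconds_without_eye_contact": 4.0,
                              "seconds_over_planned_time": 75.0}):
    print(warning)
```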

Semi-automatic

Another presentation training system, called PitchPerfect, was implemented to develop confidence in presentations Trinh et al. (2014). From interviews with presenters, the authors uncovered mismatches between the best rehearsal practices recommended in the presentation literature, actual rehearsal practices, and the support for rehearsal in conventional presentation tools. They therefore developed an integrated rehearsal environment that helps users evaluate their presentation performance while preparing structured presentations in PowerPoint. Their user study with 12 participants demonstrated that PitchPerfect led to small but significant improvements in perceived presentation quality and coverage of prepared content after a single hour of use, arising from more effective support for the presenter’s content mastery, time management, and confidence building.

Motivation

In the initial phase of our research, we selected a semi-automatic approach to evaluating discussions and presentations. However, our proposed system can acquire several kinds of student activity data so that evaluation can be automated in the near future. Because current technologies for fully automated analysis of human activity data are still insufficient for our purposes, we focus on data acquisition using our new environment for discussion and presentation.

Leaders’ Saloon: a new physical-digital learning environment

The Leaders’ Saloon, shown in Figure 1, supports the creation of discussion content using the discussion tables, the digital poster panels, and the interactive wall-sized whiteboard.

Figure 1. Leaders’ Saloon environment.

Discussion table

Each student uses a tablet to connect with the facilities, including the discussion table shown in Figure 2. The content and operation history of the discussion table are automatically transferred to and shared on a server called the meeting cloud. Previous table contents can be easily retrieved, and any texts or images can be reused. Such reference and quotation operations are recorded and analyzed to discover semantic relationships between discussions. Furthermore, software that analyzes temporal changes of table contents together with the corresponding users is also being developed.

Figure 2. Students using the discussion table.

Digital poster panel

For poster presentations, a digital poster panel system, shown in Figure 3, is used for content and operation analyses. The system helps users create digital posters and analyzes the creation process. The system also supports the retrieval of previously presented posters and allows users to annotate them. Annotations are automatically sent to the author and analyzed by the system to evaluate the quality of the poster. Poster presentations, as well as regular slide-based presentations, are also broadcast by streaming on the Web. The system collects and analyzes feedback in the form of comments and reviews given by Internet viewers; for example, Twitter users can associate their tweet messages with any scene of the presentation based on the scene’s starting and ending timestamps.

Figure 3. Poster presentation using the digital poster panel system.
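The timestamp-based association of viewer comments with presentation scenes can be pictured as a simple interval lookup. The scene schema below is an assumption for illustration; the paper does not specify the actual data layout.

```python
# Sketch of associating a timestamped viewer comment (e.g. a tweet) with the
# presentation scene whose time interval contains it. Schema is assumed.
from dataclasses import dataclass

@dataclass
class Scene:
    label: str
    start: float  # seconds from the start of the broadcast
    end: float

scenes = [Scene("introduction", 0, 120), Scene("method", 120, 480),
          Scene("results", 480, 720)]

def scene_for(timestamp):
    """Find the scene whose [start, end) interval contains the timestamp."""
    for scene in scenes:
        if scene.start <= timestamp < scene.end:
            return scene
    return None

# A comment posted 500 seconds into the stream attaches to the 'results' scene.
print(scene_for(500).label)  # -> results
```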

Interactive wall-sized whiteboard

As shown in Figure 4, our facility houses a wall-sized whiteboard. Unlike a traditional whiteboard, it can be written on both physically and digitally. We use a special projector equipped with an infrared sensor to detect the location of the digital pen with respect to the wall. The writing and interaction on the whiteboard are recorded by cameras, and from the captured camera data, physical interactions can be identified in combination with the recorded digital interaction information. This system is under development, and we are working on a new evaluation system that can enhance students’ presentation and discussion performance using it.

Figure 4. Capturing data using the interactive wall-sized whiteboard.

Discussion mining system

The discussion mining system supports knowledge discovery from the content of discussions held during face-to-face meetings. This previously developed system Nagao et al. (2005), shown in Figure 5, generates structured minutes of meetings semi-automatically and links them with audiovisual data. Each participant captures information using a personal device called the discussion commander. The created content is then viewed using the discussion browser described later, which provides a search function that lets users browse the discussion details.

Figure 5. System overview of discussion mining.

Recording and structuring discussions

Discussions in our meetings are automatically recorded, and the meeting records are composed of structured multimedia content including texts and videos. Within the content, meeting scenes are segmented based on discussion chunks, and the segments are connected with the visual and auditory data corresponding to the segmented meeting scenes.

Previous studies on structuring discussions and supporting them by referring to past structured discussion content include IBIS and gIBIS Conklin and Begeman (1988), which consider semantic discussion structures. However, most studies that provide technology for discussions and minutes generation have focused on automatic recognition techniques for auditory and visual data. For example, Lee et al. (2002) proposed a method that records participants’ actions using cameras and microphones and then produces indexed minutes using automatic recognition techniques. Chiu et al. (2001) integrated audio-visual information with information from presentation materials.

We analyze meetings not only with natural language processing, to support the comprehension of arguments in a discussion, but also from diversified perspectives using auditory and visual information in slides and other presentation content. We also use metadata to deal with discussion content. Overall, our discussion mining system supports the creation of minutes for face-to-face meetings, records the meeting environment with cameras and microphones, and generates meta-information that relates elements of the content.

In addition, the system can graphically display the structure of a discussion to facilitate understanding of the minutes and encourage effective statements. The discussion commander provides facilitation functionality such as pointing at or highlighting areas and underlining text in the presentation slides displayed on the main screen. We also developed a method to define visual referents in the presentation slides that are pointed at and referred to by meeting participants.

Our method supports sharing and re-referring to these visual referents, which in turn helps find the central topics of discussion chunks. A discussion chunk has a tree structure consisting of participants’ utterances and the relationships between pairs of utterances. An utterance has one of two types: start-up or follow-up. The start-up type is assigned when the utterance introduces a new topic, while the follow-up type is assigned when the utterance inherits its predecessor’s topic. The discussion content of a meeting comprises several discussion chunks, each a tree structure of utterances, as shown in Figure 6.

Figure 6. Discussion structure.
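The chunk structure described above maps naturally onto a small tree data structure. The following sketch uses assumed class and field names, since the paper does not specify an implementation.

```python
# Minimal tree representation of a discussion chunk: a start-up utterance
# at the root, with follow-up utterances as descendants. Names are assumed.
from dataclasses import dataclass, field

@dataclass
class Utterance:
    speaker: str
    text: str
    kind: str                                     # "start-up" or "follow-up"
    children: list = field(default_factory=list)  # follow-up utterances

    def add_follow_up(self, speaker, text):
        child = Utterance(speaker, text, "follow-up")
        self.children.append(child)
        return child

# A discussion chunk is rooted at a start-up utterance introducing a topic.
chunk = Utterance("A", "Should we adopt method X?", "start-up")
reply = chunk.add_follow_up("B", "It seems too slow for real-time use.")
reply.add_follow_up("A", "We could cache intermediate results.")
```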

The summarization of discussion content is performed as follows:

1. Based on common visual referents in the utterances included in discussion chunks, a graph structure is generated.

2. Spreading activation is applied to the graph structure, where external inputs are assigned based on utterances marked agreeable/disagreeable via the discussion commanders.

3. Highly activated utterances are selected as the more significant elements of the content.

The discussion browser allows users to adjust parameters such as the summary ratio and the weight of markings. The whole system provides functions for generating and publishing multimedia meeting records and for their in-depth search and summarization.
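The spreading-activation step can be sketched as follows. The update rule and parameters (decay factor, number of rounds) are assumptions for illustration; the paper does not specify the exact formula.

```python
# Hedged sketch of spreading activation over an utterance graph whose edges
# come from shared visual referents; external inputs come from agree/disagree
# markings. Update rule and parameters are illustrative.
def spreading_activation(neighbors, external_input, rounds=10, decay=0.8):
    """neighbors: {utterance: [adjacent utterances]};
    external_input: {utterance: value} from agree (+) / disagree (-) marks."""
    activation = dict(external_input)
    for _ in range(rounds):
        activation = {
            node: external_input.get(node, 0.0)
            + decay * sum(activation.get(n, 0.0) for n in neighbors[node])
                    / max(len(neighbors[node]), 1)
            for node in neighbors
        }
    return activation

graph = {"u1": ["u2"], "u2": ["u1", "u3"], "u3": ["u2"]}
scores = spreading_activation(graph, {"u1": 2.0, "u3": -1.0})
# Highly activated utterances are kept as significant summary elements.
print(sorted(scores, key=scores.get, reverse=True))  # e.g. ['u1', 'u2', 'u3']
```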

Real-time visualization of discussion structures and of the histories of visual referents facilitates the current discussion and allows modification of discussion structures, by changing the parent nodes of follow-up utterances and by re-referring to previous visual referents. Such modification is performed using each participant’s discussion commander. The discussion commander is also used to annotate the current utterance with agree or disagree attributes by pressing the + or - button. The time of the button press, the user who pressed it, and the target utterance are recorded and used for summarization: the target utterance of an agree annotation receives a high-valued external input when spreading activation is performed.

Since our main mission is to train students’ discussion skills, the previous system was extended with new functions to obtain user-specific data, such as the quality of statements and the level of understanding of the discussions. This led to the creation of the Leaders’ Saloon (Section ‘Leaders’ Saloon: a new physical-digital learning environment’).

Discussion browser

The information accumulated by the discussion mining system is presented synchronously in the discussion browser shown in Figure 7. This system consists of a video view, a slide view, a discussion view, a search menu, and a layered seek bar.

Figure 7. Discussion browser interface.

The discussion browser provides functions for searching and browsing discussion details in response to users’ requests. For example, when a meeting participant wants to refer to an important previous discussion, the participant searches for statements using keywords or speakers’ names and then browses the details of the statements in the search results. Users who did not participate in the meeting can search and browse the important meeting elements displayed in the layered seek bar, by looking into discussions containing statements that gathered agreements via the discussion commanders or by surveying the frequency distributions of keywords.

Video view

The video view provides recorded videos of the meeting, covering the participants, the presenter, and the screen. The participant video shows the speaker if the speaker is not the presenter, or the whole meeting room if the speaker is the presenter.

Discussion view

The discussion view consists of text forms, whose contents primarily comprise information entered by a secretary, and relation links, which visualize the structure of the discussion. This view supports understanding of the discussion contents because users can survey the discussion’s structure. The user can also tag the meeting contents for searching by selecting appropriate tags from a tag cloud containing tags extracted from the text of statements and presentation materials.

Search menu

In the search menu, three types of search queries are available: speaker name, the target of the search (the contents of the slides, the statements, or both), and keywords. Users search for the necessary information using combined queries. The search results are shown in the layered seek bar (matched elements in the timeline are highlighted) and in the discussion view (discussions in which the matched elements appear are highlighted).
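One plausible way to evaluate such combined queries is sketched below; the field names and matching rules are assumptions, not the browser’s actual implementation.

```python
# Illustrative filter combining the three query types: speaker, search target
# (slide text, statement text, or both), and keywords. Schema is assumed.
def search(statements, speaker=None, target="both", keywords=()):
    """statements: list of dicts with 'speaker', 'text', 'slide_text' keys."""
    results = []
    for s in statements:
        if speaker and s["speaker"] != speaker:
            continue  # speaker filter
        searched = {"statement": s["text"],
                    "slide": s["slide_text"],
                    "both": s["text"] + " " + s["slide_text"]}[target]
        if all(k.lower() in searched.lower() for k in keywords):
            results.append(s)
    return results

stmts = [{"speaker": "A", "text": "We should test X", "slide_text": "Plan"}]
print(search(stmts, speaker="A", target="both", keywords=["test"]))
```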

Layered seek bar

The elements that compose the meeting content are displayed in the layered seek bar. A bar is generated for each type of element and presents its details. The left edge of each bar corresponds to the start time of the meeting, and the right edge corresponds to the end time. The discussion browser enables effective reuse of meeting contents; additionally, summarization is possible by acquiring relevant discussions from the links between statements. The entire operation history of the discussion browser is saved in the database and used for the personalization of meeting contents.

Importing discussion mining system into Leaders’ Saloon

We developed an extended version of the discussion mining system that runs in the Leaders’ Saloon. The discussion tables are used to operate on and visualize discussion structures, and the users also use the discussion commanders and the previously described discussion mining system.

In this section, we explain two systems implemented on the discussion tables to visualize the information recorded by the discussion mining system: (1) the discussion visualizer, a system to visualize the structure of an ongoing discussion, and (2) the discussion reminder, a system to retrieve and visualize past discussions.

Discussion visualizer

The discussion visualizer, shown in Figure 8, visualizes the structure of meeting discussions on the discussion table (Section ‘Discussion table’). This visualizer consists of a meeting view, a slide list, a discussion segment view, and a discussion segment list.

Figure 8. Discussion visualizer interface and sample content.

The meeting view provides a preview of the camera recordings showing the participants, a list of all attendees, and the elapsed presentation time. A list of slide thumbnails in the presentation is also shown, and the thumbnail of the currently displayed slide is emphasized in the slide list. Speakers can operate the slide show by selecting a thumbnail in this view using the touch panel.

The discussion segment view shows information about the discussion segment containing the current statement. The texts of the start-up statement, which triggered the discussion, and of the parent statement of the current statement (if it is a follow-up statement) are shown at the upper side of this view. The structure of the discussion segment is shown at the bottom, where users can also correct parent statements.

Participants confirm the flow of the discussion through the discussion segment list. In this list, the nodes representing main topics are shown as rectangles, while subtopics are shown as circles. These discussion segment topics are displayed as a chain structure in the middle, the keywords of multiple discussion segments are displayed on the left, and the keywords of the main topics or subtopics are displayed on the right. Moreover, nodes that involve questions and answers are marked with the character Q. The number of agreements on the statements, inputted via the discussion commanders, is represented by the color density of the nodes, and icons are displayed next to nodes containing statements marked by the discussion commanders. This enables participants to confirm when important discussions occur.

The discussion mining system creates various kinds of discussion segments: for example, short segments containing only comments on the presentation, and long segments containing many statements as the result of a heated debate. Long discussion segments may also contain follow-up statements whose content diverges from the topic of the start-up statement. We therefore treat the start-up statement as the root node of the discussion segment, from which subtopics derive.

Discussion reminder

Reviewing and sharing previous discussion content leads to a uniform knowledge level among all participants, so that participants with less background can make remarks actively. It also prevents redundant discussion. Building on this, participants can consider topics from new points of view and figure out solutions to problems that were previously unsolved due to a lack of technology. We therefore developed a system, called the discussion reminder, to retrieve and browse past discussions on the spot.

There are two major issues in the development of the discussion reminder. One is the accurate understanding of discussion content; the other is the quick retrieval of discussion content so as not to disrupt the ongoing discussion. Unclear and inadequate sharing of discussion content inhibits a uniform knowledge level and leads to misunderstandings and confusion. Thus, the discussion reminder provides a function to browse videos of past discussions for accurate understanding.

However, all of the participants need to interrupt the ongoing discussion to review past discussion content, so it is desirable to finish the review as quickly as possible when locating the required audiovisual information. For an efficient review, the discussion reminder provides an interface that narrows down the browsed information (discussion content matching the queries, slides associated with the matched content, and statements associated with a matched slide) and lets participants retrieve information cooperatively. A participant who notices the existence of a discussion that he or she wants to review inputs queries to the discussion reminder. Various types of information, such as presenter names, meeting dates, and keywords, are available as queries. The retrieved results are displayed on the discussion table as shown in Figure 9.

Figure 9. Result contents of the discussion reminder.

Participants conduct various operations in this interface using the touch panel. The interface consists of a discussion content list, a slide list, and a discussion segment view. The discussion content list displays the titles of the discussion contents that contain discussions matching the queries. When a participant selects a title using the touch panel, the slide thumbnails comprising the selected discussion content are shown at the bottom of the slide list, and a larger preview of the selected thumbnail appears at the top of the slide list.

The discussion segment view shows information about the discussion segments associated with the slide selected in the slide list, such as the structures of the discussion segments, speakers’ IDs, and keywords of statements. The full text of a statement can also be previewed in this view. Participants can browse videos in the video view displayed on the table, starting from the start time of the statement selected in the discussion segment view.

Employing machine learning techniques

In this study, machine learning techniques are employed to obtain deep structures of presentation and discussion contents. Techniques such as deep neural networks Bengio (2009) can integrate several kinds of contextual information, such as users’ operation histories. By integrating the results of subject experiments on presentations and discussions, we aim to discover methods for evaluating the quality of students’ intellectual activities and for increasing their skills. The system also attempts consensus-building processes to make the evaluation results appropriate for each student.

Digital poster presentation system

The digital poster presentation system consists of an authoring tool for digital posters, an interactive presentation system using digital posters, and an online sharing system for digital posters. Poster presentations involve close communication with the audience and are therefore ideal for training not only presentation but also discussion skills. The digital poster presentation system makes poster presentations easier: materials such as PowerPoint slides can be integrated into a poster, and the system will be extended for interactive data acquisition. Hence, we believe that this system will significantly change the way poster presentations are given.

Digital posters vs. regular posters

A digital poster is an interactive, multimodal version of a regular paper poster. The advantages of digital posters include the retrieval and reuse of content. However, one of the biggest problems is portability, since a digital poster needs special hardware such as a digital poster panel, and these devices cannot be carried elsewhere. Perhaps in the near future, large and thin film-type screen devices, such as organic electro-luminescence displays, will become available, and tools for digital posters will become easily acquired commodities.

Authoring digital posters

Authoring a digital poster is very simple, but some preparation is needed: users should prepare resources such as images, videos, and slides in advance. We also developed an online resource management system for memos, images, videos, and slides. The digital poster authoring tool can import any resource submitted to or shared in the resource management system.

The digital poster authoring tool shown in Figure 10 has three parts: the main menu, the resources menu, and the poster field.

Figure 10. Main screen of the digital poster authoring tool.

The main menu provides the basic functionalities of the tool, such as creating, opening, and saving poster files, setting preferences, and choosing different creation modes. The authoring tool can create posters in both portrait and landscape orientations as needed.

The resources menu, shown in Figure 11, lets users add different types of blocks to the poster field. Except for the layout and text blocks, each block automatically downloads a certain type of resource from the online resource management system depending on the selected block: selecting an image block automatically scans for images in the resource management system, selecting a video block scans for videos, and for the slide block, existing PowerPoint slides are selected. A sketch of this mapping is given after Figure 11.

Figure 11. Detailed view of the resources menu.
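The block-to-resource mapping can be expressed as a small lookup. The ResourceManager class and its method names below are assumptions for illustration; the paper does not specify the resource management system’s API.

```python
# Sketch of the block-to-resource mapping described above. All names assumed.
class ResourceManager:
    """Stand-in for the online resource management system."""
    def __init__(self, resources):
        self.resources = resources  # e.g. {"images": [...], "videos": [...]}

    def list(self, resource_type):
        return self.resources.get(resource_type, [])

BLOCK_RESOURCE_TYPES = {"image": "images", "video": "videos", "slide": "slides"}
# Layout and text blocks take no resource from the management system.

def resources_for_block(block_type, manager):
    resource_type = BLOCK_RESOURCE_TYPES.get(block_type)
    return manager.list(resource_type) if resource_type else []

manager = ResourceManager({"images": ["fig1.png"], "videos": ["demo.mp4"]})
print(resources_for_block("image", manager))   # ['fig1.png']
print(resources_for_block("layout", manager))  # []
```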

When the user taps a block in the resources menu, a list of thumbnail images is displayed in a window that appears from the right edge of the screen, as shown in Figure 12. The user can easily arrange the layout of the poster using a layout block and interactively change the position of a block’s borderline. To place a resource in a block, the user simply drags and drops the thumbnail image from the resource list to the target block, as shown in Figure 13.

Figure 12. Image resource menu window.

Figure 13. Image resource placement in the layout block.

Other resources, such as videos and slides, are inserted into blocks in a similar way. An example of a poster created with the authoring tool is shown in Figure 14. When the user has finished editing the digital poster, the final poster can be stored in the online poster sharing system, where it can be retrieved for presentations at any time. During a presentation, images in the poster can be enlarged, and videos and slides can be played back.

Figure 14. Example of a digital poster.

Data acquisition from interactions with digital posters

Digital posters are not only for a presenter to give a presentation but also for an audience to examine in detail by interacting with the poster. Unlike slides, a poster summarizes the complete content in one piece, which makes the content quicker to understand. At the Leaders’ Saloon, visitors can easily retrieve and view the digital posters on the digital poster panel whenever they like. Visitors’ interaction histories with the posters are recorded automatically: the number and duration of poster views, views of individual elements in the poster, and the order in which poster elements are browsed can all be obtained. These data are used to evaluate the posters and the skills of the poster author.
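The following sketch shows one plausible shape for such interaction records and their aggregation. The field names are assumptions; the paper lists only the measured quantities, not a schema.

```python
# Sketch of poster interaction logging and per-element aggregation.
from dataclasses import dataclass

@dataclass
class PosterInteraction:
    poster_id: str
    element_id: str      # which poster element was viewed, enlarged, or played
    action: str          # e.g. "view", "enlarge", "play"
    timestamp: float     # seconds since some reference time
    duration_sec: float  # how long the interaction lasted

def summarize(events):
    """Aggregate view counts and total viewing time per poster element."""
    summary = {}
    for e in events:
        count, total = summary.get(e.element_id, (0, 0.0))
        summary[e.element_id] = (count + 1, total + e.duration_sec)
    return summary

events = [PosterInteraction("p1", "intro", "view", 0.0, 12.5),
          PosterInteraction("p1", "video1", "play", 15.0, 40.0),
          PosterInteraction("p1", "intro", "view", 60.0, 5.0)]
print(summarize(events))  # {'intro': (2, 17.5), 'video1': (1, 40.0)}
```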

Skill evaluation methods

The focus of this study is the students of the new Graduate Leading Program at Nagoya University, which aims to nurture future global leaders. Achieving this goal requires improving the students’ communication skills. In this study, we focus on developing their discussion and presentation skills, and this section describes the corresponding evaluation methods in detail.

Discussion skill

Data acquired by the discussion mining system include participant types (presenter, secretary, and others), the number of start-up/follow-up statements of each participant, and quality scores of each statement. The quality scores are calculated from the agreement/disagreement data inputted via each participant’s discussion commander during discussions: for each statement, one point is added for each agreement and one point is subtracted for each disagreement. Aggregate data for multiple students over three months are shown in Table 1.

Table 1 Discussion mining data results

The discussion skill of a student is evaluated using a score calculated as follows. First, weight values for each behavior are determined. These weights will be determined rationally in the future using machine learning, but for now, the values were decided intuitively based on the difficulty of execution:

  • Number of participations: 3

  • Number of presentations: 10

  • Number of secretary acts: 5

  • Start-up statements (excluding those made as presenter): 3

  • Follow-up statements (excluding those made as presenter): 2

  • Quality (sum of agreement/disagreement values): 4

The score is the sum obtained after applying these weights to the count of each behavior; the evaluation of statement quality is included in the same way. For the discussion skill score, start-up and follow-up statements made as a presenter are excluded, because presenters naturally must answer questions from other participants, and such statements should not be treated the same way as remarks made spontaneously by discussion participants.
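To make the calculation concrete, the following minimal example applies the weights listed above; the behavior counts are hypothetical.

```python
# Worked example of the weighted discussion skill score, using the weights above.
WEIGHTS = {
    "participations": 3,
    "presentations": 10,
    "secretary_acts": 5,
    "startup_statements": 3,   # excluding statements made as presenter
    "followup_statements": 2,  # excluding statements made as presenter
    "quality": 4,              # sum of agreement/disagreement values
}

def discussion_skill_score(counts):
    """Sum of each behavior count multiplied by its weight."""
    return sum(WEIGHTS[behavior] * value for behavior, value in counts.items())

# A hypothetical student's three-month record:
print(discussion_skill_score({
    "participations": 12, "presentations": 2, "secretary_acts": 3,
    "startup_statements": 8, "followup_statements": 15, "quality": 20,
}))  # 3*12 + 10*2 + 5*3 + 3*8 + 2*15 + 4*20 = 205
```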

Students can use this evaluation as a reference to judge their status and analyze their weak points. A student’s performance increases if the student makes many statements when not presenting and many of the other participants agree with those statements. These data can thus serve as a basis for improving a student’s discussion skills, and improvement can be confirmed through high-quality statements, that is, statements gathering agreements from many participants.

Presentation skill

A study on developing oral presentation skills embedded oral presentations and their assessment into the curriculum Kerby and Romine (2009). In that case study, at least one oral presentation was included in each of three courses, and a rubric was used to assess the presentations. The results indicate that students better understood their weaknesses, strengths, and areas for improvement. In our study, we implemented the same design to improve our students’ presentation skills: we conducted two poster presentation sessions with two groups of students to evaluate their presentation skills. We used the poster presentation format instead of regular oral presentations because of its interactivity: in poster presentations, students engage in conversation with people, giving them more opportunities to improve their communication skills. Poster presentations also enliven student presentations because students interact with each other rather than just passively observing, as in formal presentations.

Poster presentation session I

In this poster presentation session, twenty-four (24) inexperienced students were divided into six (6) groups, and each group was asked to create a digital poster using the authoring tool discussed in Section ‘Digital poster presentation system’. Each group presented its poster in the allotted time of fifteen (15) minutes, while members of the other groups and spectators of the poster session evaluated each presentation. The evaluation sheet used for this session is shown in Table 2.

Table 2 Poster presentation evaluation sheet I

Evaluators fill out the feedback form shown in Table 2 for each presentation. The evaluation criteria comprise Content, Organization, and Impact, based on the common themes in Brownlie’s 2007 bibliography Hess et al. (2009). For scoring, the ratings are mapped to numerical values as follows: Bad is 1, Poor is 2, Fair is 3, Good is 4, and Excellent is 5. The average scores over all evaluators for all group presentations are shown in Table 3.

Table 3 Poster presentation I score results
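For clarity, the rating-to-score mapping and averaging can be expressed as a small sketch; the example ratings are hypothetical.

```python
# The rating-to-score mapping used for the session, plus a simple average.
RATING_SCORES = {"Bad": 1, "Poor": 2, "Fair": 3, "Good": 4, "Excellent": 5}

def average_score(ratings):
    """Average the numerical values of a list of evaluator ratings."""
    return sum(RATING_SCORES[r] for r in ratings) / len(ratings)

print(average_score(["Good", "Excellent", "Fair", "Good"]))  # 4.0
```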

In addition, the scores and standard deviations of all groups are shown in Figure 15. Since the standard deviations are not too large, this metric does not deviate far from human intuition. Histories of interactions with the digital posters have not been analyzed yet; we plan to combine the human metrics with accumulated data, such as access counts of posters and their internal elements, in the near future.

Figure 15. Scores and standard deviations of all groups.

However, there were major drawbacks in the evaluation sheet used in this session. First, we failed to evaluate the students’ presentation delivery. Second, the evaluators found it hard to judge a criterion based on the ratings alone. Thus, for the next poster presentation session, we made a number of changes to our evaluation sheet.

Poster presentation session II

In this poster presentation session, five (5) students were asked to create and present five digital posters using the authoring tool discussed in Section ‘Digital poster presentation system’. Spectators of the poster session were asked to evaluate each poster presentation.

We improved our evaluation criteria based on the problems encountered in the first trial. According to feedback from the evaluators, one major drawback of the previous evaluation sheet (Table 2) was that the ratings (Bad, Poor, Fair, Good, and Excellent) lacked detailed descriptions for each criterion. The evaluation sheet was therefore changed to a rubric with concrete descriptions for each score and criterion, shown in Table 4. With this rubric style, the evaluation criteria were clearer and evaluation was faster for the spectators. Aside from the format, the sets of criteria were also modified by adding two new sets: Impact and Presentation. Impact determines how well the poster attracts the attention of spectators and consists of criteria evaluating the poster’s title, overall appearance, and interest. Presentation evaluates the students’ poster presentation skills and consists of the ability to communicate properly with the audience and the ability to answer questions confidently. These new sets of criteria provided a more effective and complete digital poster evaluation.

Table 4 Poster presentation evaluation sheet II

Using the new evaluation sheet, the score results of the second round of poster presentations are shown in Table 5. Evaluation scores were calculated separately for professors (P) and students (S). With these results, we were able to determine the weaknesses of each poster with respect to the criteria. For example, the content and organization of Poster III needed considerable improvement, so as feedback, its author should focus on these sets of criteria when revising the poster.

Table 5 Poster presentation II score results

Future features of the new learning environment

The current training environment contains 2D interactive systems, such as the touch panel discussion tables, the digital poster panels, and the interactive wall-sized whiteboard, which facilitate users’ interactions with the system. To further enhance the learning environment, a vision system will be incorporated to extend the interaction to 3D. The system will consist of a multi-camera setup or a Kinect, which combines a camera and a range sensor. Moreover, given an intelligent system that recognizes users with a robust face detection algorithm, user interaction will be smooth, and annotations will be automated and personalized, creating a more advanced learning environment. Automated evaluation and facilitation of intellectual activities will also be applied to confirm whether the students’ skills improve and whether their created contents obtain higher evaluations than previous ones.

The current criteria do not evaluate body movement, gestures, posture, and the like, which are common in evaluating presentations. These criteria are difficult to evaluate, but we plan to incorporate them through the planned vision system. To improve the evaluation criteria presented in the previous section, the automated system is expected to receive real-time evaluations from the audience and to provide the presenter with a relative score. Registered audience members can input their scores on an online sheet with a tablet while attending the poster session. The audio-visual system is also expected to record the visual and audio interactions between the presenter and each audience member, and the system will match the online score provided by each spectator with his or her interaction with the presenter. We hope to use the relation between the audio-visual information and the provided online scores to train the system. Our eventual goal is a nearly automatic evaluation system that is regularly trained by the audience.

Conclusion

A novel physical-digital learning environment for discussion and presentation skills training has been developed at our university under the leading graduate program. Using state-of-the-art technologies, the selected students of the program can hold effective, interactive, and smooth discussions while the discussion mining system simultaneously summarizes and annotates the ongoing discussion. The discussion contents are available to the community and to the faculty for evaluation, feedback, and follow-up activities.

Moreover, the students of the program also have opportunities to improve their presentation skills with our digital poster presentation system. A presentation evaluation system has been adopted in our physical-digital learning environment that will be capable of evaluating each presentation and/or discussion based on the audience’s online feedback, recorded audio-visual data, and interaction with the facilities. The developed interactive presentation system has been initially evaluated and shown to be effective, not only in our novel physical-digital learning environment but also for any other user equipped with at least an interactive display.

Evaluations were conducted to help the students overcome their weak points in discussions and presentations. With this prototype environment, a new education system may emerge that promotes efficient and advanced learning.

Consent

Written informed consent was obtained from the students for the publication of this report and any accompanying images.

References

  • Armstrong, K: Big data: a revolution that will transform how we live, work, and think. Inf. Commun. Soc. 17(10), 1300–1302 (2014).

  • Bengio, Y: Learning deep architectures for AI. Found. Trends Mach. Learn. 2(1), 1–127 (2009).

  • Chiu, P, Boreczky, J, Girgensohn, A, Kimber, D: LiteMinutes: an Internet-based system for multimedia meeting minutes. In: Proceedings of the Tenth World Wide Web Conference (WWW10), pp. 140–149. ACM, New York, NY, USA (2001).

  • Conklin, J, Begeman, ML: gIBIS: a hypertext tool for exploratory policy discussion. ACM Trans. Inf. Syst. 6(4), 303–331 (1988).

  • Cutrone, L, Chang, M, Kinshuk: Auto-Assessor: computerized assessment system for marking students’ short-answers automatically. In: Technology for Education (T4E), 2011 IEEE International Conference on, pp. 81–88. IEEE Computer Society, Los Alamitos, CA, USA (2011). ISBN 978-0-7695-4534-9.

  • Hess, GR, Tosney, KW, Liegel, LH: Creating effective poster presentations: AMEE Guide no. 40. Med. Teach. 31(4), 319–321 (2009).

  • Kerby, D, Romine, J: Develop oral presentation skills through accounting curriculum design and course-embedded assessment. J. Educ. Bus. 85(3), 172–179 (2009).

  • Kurihara, K, Goto, M, Ogata, J, Matsusaka, Y, Igarashi, T: Presentation Sensei: a presentation training system using speech and image processing. In: Proceedings of the 9th International Conference on Multimodal Interfaces (ICMI ’07), pp. 358–365. ACM, New York, NY, USA (2007).

  • Lee, D-S, Erol, B, Graham, J, Hull, JJ, Murata, N: Portable meeting recorder. In: Proceedings of the Tenth ACM International Conference on Multimedia (MULTIMEDIA ’02), pp. 493–502. ACM, New York, NY, USA (2002).

  • Nagao, K, Kaji, K, Yamamoto, D, Tomobe, H: Discussion mining: annotation-based knowledge discovery from real world activities. In: Aizawa, K, Nakamura, Y, Satoh, S (eds.) Advances in Multimedia Information Processing - PCM 2004. Lecture Notes in Computer Science, vol. 3331, pp. 522–531. Springer, Berlin Heidelberg (2005). http://dx.doi.org/10.1007/978-3-540-30541-5_64.

  • Ohira, S, Kawanishi, K, Nagao, K: Assessing motivation and capacity to argue in a gamified seminar setting. In: Proceedings of the Second International Conference on Technological Ecosystems for Enhancing Multiculturality (TEEM ’14), pp. 197–204. ACM, New York, NY, USA (2014).

  • Omori, Y, Ito, K, Nishida, S, Kihira, T: Study on supporting group discussions by improving discussion skills with ex post evaluation. In: Systems, Man and Cybernetics, 2006 (SMC ’06), IEEE International Conference on, vol. 3, pp. 2191–2196. IEEE (2006).

  • Sellen, AJ, Whittaker, S: Beyond total capture: a constructive critique of lifelogging. Commun. ACM 53(5), 70–77 (2010).

  • Trinh, H, Yatani, K, Edge, D: PitchPerfect: integrated rehearsal environment for structured presentation preparation. In: Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems (CHI ’14), pp. 1571–1580. ACM, New York, NY, USA (2014).

  • Wanas, N, El-Saban, M, Ashour, H, Ammar, W: Automatic scoring of online discussion posts. In: Proceedings of the 2nd ACM Workshop on Information Credibility on the Web (WICOW ’08), pp. 19–26. ACM, New York, NY, USA (2008).

Acknowledgments

This work is supported by the Real-World Data Circulation Leaders’ Graduate Program of Nagoya University.

Author information

Corresponding author

Correspondence to Katashi Nagao.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

KN, MPT, and JTF conducted research activities on development of an advanced physical-digital learning environment that can train students to enhance their discussion and presentation skills, and drafted the results in the manuscript. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Nagao, K., Tehrani, M.P. & B Fajardo, J.T. Tools and evaluation methods for discussion and presentation skills training. Smart Learn. Environ. 2, 5 (2015). https://doi.org/10.1186/s40561-015-0011-1
