CS2113/T 2020 Jan-Apr

Week 5 [from Wed Feb 5 noon] - Admin Info

    1. Submit coding exercises on repl.it
    2. Practice peer evaluation on TEAMMATES by Sunday (Feb 16th)

    1 Submit coding exercises on repl.it

    • As before, submit the coding exercises allocated for the current week, and any pending exercises from previous weeks.

    2 Practice peer evaluation on TEAMMATES by Sunday (Feb 16th)

    • You should receive the submission link by Monday noon. Email cs2113@comp.nus.edu.sg if you did not receive the submission link on time.

    This module leverages peer feedback/evaluations in many ways. In particular, we do several rounds of peer evaluations using TEAMMATES.

    Tool Used: TEAMMATES (for Peer Evaluations/Feedback)

    We use the TEAMMATES online peer evaluation system. TEAMMATES is a project run by NUS SoC students and used by over 0.5 million users from over 1000 universities.

    Preparation: When the first feedback session is open on TEAMMATES, you will receive an email from TEAMMATES. There is nothing for you to do until then.

    When you do receive that email, you can access TEAMMATES without using a Google login. However, we encourage (but do not require) you to log in to TEAMMATES using your Google account and complete your profile with a suitable profile photo. Reason: CS2113/T is a big class. This profile helps us remember you better, even after the module is over.

     

    The purpose of the profile photo is for the teaching team to identify you. Therefore, choose a recent individual photo showing your face clearly (i.e., not too small) -- somewhat similar to a passport photo. Some examples can be seen in the 'Teaching team' page.

    If you are uncomfortable posting your photo due to security reasons, you can post a lower resolution image so that it is hard for someone to misuse that image for fraudulent purposes. If you are concerned about privacy, you may use a placeholder image in place of the photo in module-related documents that are publicly visible.

    Submitting peer evaluations is compulsory. If you routinely miss submitting peer evaluations, you can lose participation marks.

    Session: Practice Peer Evaluation

    • Objective: to give you a chance to familiarize yourself with the TEAMMATES tool
    • Held early in the semester
    • Submission is compulsory. However, your responses will not be considered for grading as this session is for practice only.

    Policy on deadline extensions

    Learning to honor deadlines is a learning outcome of this module. Therefore, we do not normally extend module deadlines to accommodate those who missed the deadline, unless there are some extraordinary circumstances.

    Other info relevant to this week:

    Admin tP: Grading

    Note that project grading is not competitive (not bell curved). CS2113T projects will be assessed separately from CS2113 projects. Given below is the marking scheme.

    Total: 55 marks (45 individual marks + 10 team marks)

    See the sections below for details of how we assess each aspect.

    1. Project Grading: Product Design [5 marks]

    Evaluates: how well your features fit together to form a cohesive product (not how many features or how big the features are) and how well the product matches the target user

    Evaluated by:

    • tutors (based on product demo and user guide)
    • peers from other teams (based on peer testing and user guide)

    Q Quality of the product design,
    Evaluate based on the User Guide and the actual product behavior.

    Criteria, each rated from 'Unable to judge' through Low and Medium to High:
    • target user: not specified → clearly specified and narrowed down appropriately
    • value proposition: not specified → The value to the target user is low; the app is not worth using → Some small group of target users might find the app worth using → Most of the target users are likely to find the app worth using
    • optimized for target user: Not enough focus for CLI users → Mostly CLI-based, but cumbersome to use most of the time → Feels like a fast typist can be more productive with the app, compared to an equivalent GUI app without a CLI

    In addition, feature flaws reported in the PE will be considered when grading this aspect.

    These are considered feature flaws:
    • The feature does not solve the stated problem of the intended user, i.e., the feature is 'incomplete'
    • Hard-to-test features
    • Features that don't fit well with the product
    • Features that are not optimized enough for fast typists or target users

    Note that 'product design' or 'functionality' are not critical learning outcomes of the tP. Therefore, the bar you need to reach to get full 5 marks will be quite low. For example, the Medium level in the rubric given in the panel above should be enough to achieve full marks. Similarly, only cases of excessive 'feature flaw' bugs will affect the score.

    2. Project Grading: Implementation [20 marks]

    2A. Code quality

    Evaluates: the quality of the parts of the code you claim as written by you

    Evaluation method: manual inspection by tutors + automated-analysis by a script

    Criteria:

    • At least some evidence of these (see here for more info)

      • logging
      • exceptions
      • assertions
    • No coding standard violations e.g. all boolean variables/methods sound like booleans.

    • SLAP is applied at a reasonable level. Long methods or deeply-nested code are symptoms of low-SLAP.

    • No noticeable code duplications i.e. if there are multiple blocks of code that vary only in minor ways, try to extract out the similarities into one place, especially in test code.

    • Evidence of applying code quality guidelines covered in the module (a small illustrative sketch is given after this list).
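
    To make these criteria concrete, here is a small illustrative Java sketch. It is purely hypothetical (the Loan/LoanManager classes and all names are made up, not taken from any actual iP/tP codebase); it only shows what logging, a custom exception, an assertion, boolean-sounding names, and SLAP can look like when applied together.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.logging.Level;
        import java.util.logging.Logger;

        // Hypothetical example class: records items loaned out to other people.
        class Loan {
            private final String id;
            private boolean isReturned; // boolean variable named so that it sounds like a boolean

            Loan(String id) {
                this.id = id;
            }

            boolean hasId(String otherId) { // boolean method named so that it sounds like a boolean
                return id.equals(otherId);
            }

            void markAsReturned() {
                isReturned = true;
            }
        }

        // Hypothetical exception type, used instead of returning null or an error code.
        class LoanNotFoundException extends Exception {
            LoanNotFoundException(String loanId) {
                super("No loan found with id: " + loanId);
            }
        }

        public class LoanManager {
            private static final Logger logger = Logger.getLogger(LoanManager.class.getName());
            private final List<Loan> loans = new ArrayList<>();

            public void addLoan(Loan loan) {
                loans.add(loan);
            }

            // SLAP: this method reads at a single level of abstraction;
            // the searching details are pushed down into the findLoan() helper.
            public void returnLoan(String loanId) throws LoanNotFoundException {
                assert loanId != null : "loanId should have been validated before reaching here";
                logger.log(Level.INFO, "Returning loan {0}", loanId);
                findLoan(loanId).markAsReturned();
            }

            private Loan findLoan(String loanId) throws LoanNotFoundException {
                for (Loan loan : loans) {
                    if (loan.hasId(loanId)) {
                        return loan;
                    }
                }
                throw new LoanNotFoundException(loanId);
            }
        }

    This is only a sketch of the kind of evidence tutors look for; it is not a required structure for your code.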

    2B. Effort

    Evaluates: how much value you contributed to the product

    Method:

    • This is evaluated by peers who tested your product, and tutors.

    Q [For each member] The functional code contributed by the person is,
    Consider implementation work only (i.e., exclude testing, documentation, project management etc.)
    The typical iP refers to an iP where all the requirements are met at the minimal expectations given.
    Use the person's PPP and RepoSense page to evaluate the effort.

    • The score could be further moderated by this question answered by team members.

    Q The team members' contribution to the product implementation (excluding UG, DG, and team-based tasks) is,

    3. Project Grading: QA [15 marks]

    3A. Developer Testing:

    Evaluates: How well you tested your own feature

    Based on:

    1. functionality bugs in your work found by others during the Practical Exam (PE)
    2. your test code (note our expectations for automated testing)
     
    • Expectation Write some automated tests so that we can evaluate your ability to write tests.

    🤔 How much testing is enough? We expect you to decide. You learned different types of testing and what they try to achieve. Based on that, you should decide how much of each type is required. Similarly, you can decide to what extent you want to automate tests, depending on the benefits and the effort required.
    There is no requirement for a minimum coverage level. Note that in a production environment you are often required to have at least 90% of the code covered by tests. In this project, it can be less. The weaker your tests are, the higher the risk of bugs, which will cost marks if not fixed before the final submission. A minimal illustrative test is sketched below.
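
    As a rough illustration only, an automated test could look like the following. The DateParser class is hypothetical and is inlined into the test file just to keep the sketch self-contained; this assumes your project has JUnit 5 set up (e.g., via Gradle). Substitute your own classes and cases.

        import static org.junit.jupiter.api.Assertions.assertEquals;
        import static org.junit.jupiter.api.Assertions.assertThrows;

        import java.time.LocalDate;

        import org.junit.jupiter.api.Test;

        public class DateParserTest {

            // Hypothetical method under test, inlined here only to keep the sketch self-contained.
            static class DateParser {
                static LocalDate parse(String input) {
                    if (input == null || input.isBlank()) {
                        throw new IllegalArgumentException("Date cannot be empty");
                    }
                    return LocalDate.parse(input); // expects the yyyy-MM-dd format
                }
            }

            @Test
            public void parse_validDate_success() {
                assertEquals(LocalDate.of(2020, 2, 16), DateParser.parse("2020-02-16"));
            }

            @Test
            public void parse_emptyInput_exceptionThrown() {
                assertThrows(IllegalArgumentException.class, () -> DateParser.parse(""));
            }
        }

    Even a handful of such tests per feature gives us something to evaluate; the mix of test types and the coverage level are up to you, as explained above.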

    These are considered functionality bugs:
    • Behavior differs from the User Guide
    • A legitimate user behavior is not handled e.g. incorrect commands, extra parameters
    • Behavior is not specified and differs from normal expectations e.g. error message does not match the error

    3B. System/Acceptance Testing:

    Evaluates: How well you can system-test/acceptance-test a product

    Based on: bugs you found in the PE. In addition to functionality bugs, you get credit for reporting documentation bugs and feature flaws.

    Grading bugs found in the PE
    • Of the Developer Testing component (3A above, based on the bugs found in your code) and the System/Acceptance Testing component (3B above, based on the bugs found in others' code), the one you do better in will be given a 70% weight and the other a 30% weight, so that your total score is driven by your strengths rather than weaknesses (see the worked sketch after this list).
    • Bugs rejected by the dev team, if the rejection is approved by the teaching team, will not affect marks of the tester or the developer.
    • The penalty/credit for a bug varies based on,
      • The severity of the bug: severity.High > severity.Medium > severity.Low > severity.VeryLow
      • The type of the bug: type.FunctionalityBug > type.DocumentationBug > type.FeatureFlaw
    • The penalty for a bug is divided equally among assignees.
    • Developers are not penalized for duplicate bug reports they received (i.e., the same bug reported by many testers), but the testers earn credit for duplicate bug reports they submitted as long as the duplicates are not submitted by the same tester.
    • Obvious bugs earn less credit for the tester and slightly more penalty for the developer.
    • If the team you tested has a low bug count i.e., total bugs found by all testers is low, we will fall back on other means (e.g., performance in PE dry run) to calculate your marks for system/acceptance testing.
    • Your marks for developer testing depend on the bug density rather than the total bug count. Here's an example:
      • n bugs found in your feature; it is a difficult feature consisting of lot of code → 4/5 marks
      • n bugs found in your feature; it is a small feature with a small amount of code → 1/5 marks
    • You don't need to find all bugs in the product to get full marks. For example, finding half of the bugs of that product or 4 bugs, whichever is lower, could earn you full marks.
    • Excessive incorrect downgrading/rejecting/duplicate-flagging (i.e., marking bugs as duplicates), if deemed an attempt to game the system, will be penalized.
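
    As a purely hypothetical worked example of the 70/30 weighting described above (made-up numbers; the actual mark computation is done by the teaching team, not by you):

        public class WeightedTestingScoreExample {
            public static void main(String[] args) {
                // Suppose (hypothetically) you did better at developer testing (3A)
                // than at system/acceptance testing (3B), scoring the equivalent of
                // 80% and 50% respectively on the bug-based components.
                double developerTesting = 0.80;
                double acceptanceTesting = 0.50;
                double better = Math.max(developerTesting, acceptanceTesting);
                double worse = Math.min(developerTesting, acceptanceTesting);
                // The stronger component gets a 70% weight, the weaker one 30%:
                // 0.7 * 0.80 + 0.3 * 0.50 = 0.71, i.e., 71% overall.
                double combined = 0.7 * better + 0.3 * worse;
                System.out.printf("Combined testing score: %.2f%n", combined);
            }
        }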

    4. Project Grading: Documentation [10 marks]

    Evaluates: your contribution to project documents

    Method: Evaluated in two steps.

    • Step 1: Evaluate the whole UG and DG. This is evaluated by peers who tested your product, and tutors.

    Q Compared to AddressBook-Level3 (AB3), the overall quality of the UG you evaluated is,
    Evaluate based on fit-for-purpose, from the perspective of a target user. For reference, the AB3 UG is here.

    Q Compared to AB3, the overall quality of the DG you evaluated is,
    Evaluate based on fit-for-purpose from the perspective of a new team member trying to understand the product's internal design by reading the DG. For reference, the AB3 DG is here.

    • Step 2: Evaluate how much of that effort can be attributed to you. This is evaluated by team members, and tutors.

    Q The team members' contribution to the User Guide is,

    Q The team members' contribution to the Developer Guide is,

    • In addition, UG and DG bugs you received in the PE will be considered for grading this component.

    These are considered UG bugs (if they hinder the reader):

    Use of visuals

    • Not enough visuals e.g., screenshots/diagrams
    • The visuals are not well integrated to the explanation
    • The visuals are unnecessarily repetitive e.g., same visual repeated with minor changes

    Use of examples:

    • Not enough or too many examples e.g., sample inputs/outputs

    Explanations:

    • The explanation is too brief or unnecessarily long.
    • The information is hard to understand for the target audience. e.g., using terms the reader might not know

    Neatness/Correctness:

    • looks messy
    • not well-formatted
    • broken links, other inaccuracies, typos, etc.

    These are considered DG bugs (if they hinder the reader):

    All the categories listed above as UG bugs (use of visuals, use of examples, explanations, neatness/correctness) apply to the DG as well. In addition:

    UML diagrams:

    • Notation incorrect or not compliant with the notation covered in the module.
    • Some other type of diagram used when a UML diagram would have worked just as well.
    • The diagram used is not suitable for the purpose it is used.
    • The diagram is too complicated.

    Code snippets:

    • Excessive use of code e.g., a large chunk of code is cited when a smaller extract would have sufficed.

    Problems in User Stories. Examples:

    • Incorrect format
    • Not all three parts (i.e., user, function, benefit) are present
    • Benefit does not match the function
    • Important user stories missing

    Problems in NFRs. Examples:

    • Not really a Non-Functional Requirement
    • Not well-defined (i.e., hard to decide when it has been met)
    • Not reasonably achievable
    • Highly relevant NFRs missing

    Problems in Glossary. Examples:

    • Unnecessary terms included
    • Important terms missing

    5. Project Grading: Project Management [5 marks]

    5A. Process:

    Evaluates: How well you did in project management related aspects of the project, as an individual and as a team

    Based on: tutor/bot observations of project milestones and GitHub data

    Grading criteria:

    • No major mishaps (e.g., the product is not working at all by the milestone deadline) at v1.0 and v2.0.
    • Good attempt to use at least some Git and GitHub features (e.g., milestones, releases, issue tracker, PRs)
    • Project done iteratively and incrementally (opposite: doing most of the work in one big burst)

    5B. Team-tasks:

    Evaluates: How much you contributed to team-tasks

    Here is a non-exhaustive list of team-tasks:

    1. Necessary general code enhancements
    2. Setting up tools e.g., GitHub, Gradle
    3. Maintaining the issue tracker
    4. Release management
    5. Updating user/developer docs that are not specific to a feature e.g. documenting the target user profile
    6. Incorporating more useful tools/libraries/frameworks into the product or the project workflow (e.g. automate more aspects of the project workflow using a GitHub plugin)

    Based on: peer evaluations, tutor observations

    Grading criteria: To earn full marks,

    • you have done close to a fair share of the team tasks. You can earn bonus marks by doing more than your fair share.
    • you have merged code in at least four of weeks 7, 8, 9, 10, 11, 12


    Policy on project work distribution

    As most of the work is graded individually, it is OK to do less or more than an equal share of the work in your project team.

    Individual Expectations

    Individual Expectations on Implementation

    • Expectation Contribute to the functional code of the product.

      • User-visible features are preferred, but it is not a strict requirement.
      • The enhancement(s) should fit with the rest of the software (and the target user profile) and should have the consent of the team members. You will lose marks if you go 'rogue' and add things that don't fit with the product.
    • Tip: Contribute to all aspects of the project e.g. write backend code, frontend code, test code, user documentation, and developer documentation. Reason: If you limit yourself to certain aspects only, you could lose marks allocated for the aspects you did not do. In addition, the final exam assumes that you are familiar with all aspects of the project.

    • Tip: Do all the work related to your enhancement yourself. Reason: If there is no clear division of who did which enhancement, it will be difficult to divide project credit (or assign responsibility for bugs detected by testers) later.

    Individual Expectations on Documentation

    • Objective: showcase your ability to write both user-facing documentation and developer-facing documentation.
    • Expectation Update the User Guide (UG) and the Developer Guide (DG) parts that are related to the enhancements you added. The minimum requirement is given below. (Reason: Evaluators will not be able to give you marks unless there is sufficient evidence of your documentation skills.)
      • UG: at least 1 page
      • DG: at least 1 page
    • Tip: If the UG/DG updates for your enhancements are not enough to reach the above requirements, you can make up the shortfall by documenting 'proposed' features and alternative designs/implementations.
    • Expectation Include at least some UML diagrams in your DG updates, i.e., diagrams you added yourself or those you modified significantly.

    Individual Expectations on Testing

    • Expectation Write some automated tests so that we can evaluate your ability to write tests.

    🤔 How much testing is enough? We expect you to decide. You learned different types of testing and what they try to achieve. Based on that, you should decide how much of each type is required. Similarly, you can decide to what extent you want to automate tests, depending on the benefits and the effort required.
    There is no requirement for a minimum coverage level. Note that in a production environment you are often required to have at least 90% of the code covered by tests. In this project, it can be less. The weaker your tests are, the higher the risk of bugs, which will cost marks if not fixed before the final submission.

    Individual Expectations on Teamwork

    • Expectation Do a fair share of the team-tasks.

    Team-tasks are the tasks that someone in the team has to do. Marks allocated to team-tasks will be divided among team members based on how much each member contributed to those tasks.

    Here is a non-exhaustive list of team-tasks:

    1. Necessary general code enhancements
    2. Setting up tools e.g., GitHub, Gradle
    3. Maintaining the issue tracker
    4. Release management
    5. Updating user/developer docs that are not specific to a feature e.g. documenting the target user profile
    6. Incorporating more useful tools/libraries/frameworks into the product or the project workflow (e.g. automate more aspects of the project workflow using a GitHub plugin)

    • Expectation Assume a fair share of project roles and responsibilities.

    Roles indicate aspects you are in charge of and responsible for. E.g., if you are in charge of documentation, you are the person who should allocate which parts of the documentation are to be done by whom, ensure the documents are in the right format, ensure consistency, etc.

    This is a non-exhaustive list; you may define additional roles.

    • Team lead: Responsible for overall project coordination.
    • Documentation (short for ‘in charge of documentation’): Responsible for the quality of various project documents.
    • Testing: Ensures the testing of the project is done properly and on time.
    • Code quality: Looks after code quality, ensures adherence to coding standards, etc.
    • Deliverables and deadlines: Ensures project deliverables are done on time and in the right format.
    • Integration: In charge of versioning of the code, maintaining the code repository, integrating various parts of the software to create a whole.
    • Scheduling and tracking: In charge of defining, assigning, and tracking project tasks.
    • [Tool ABC] expert: e.g. Intellij expert, Git expert, etc. Helps other team members with matters related to the specific tool.
    • In charge of [Area XYZ] of the code: e.g. in charge of the code that deals with storage, etc. If you are in charge of an area, you are expected to know that area well and review changes done to that code.

    Ensure that each of the important roles is assigned to one person in the team. It is OK to have a 'backup' for each role, but for each aspect there should be one person who is unequivocally the person responsible for it. Reason: when everyone is responsible for everything, no one is.

    • Expectation Review each other's work. Reason: reviewing skills are a learning outcome, and peer review is mutually beneficial.


    Policy on email response time

    Normally, the prof will respond within 24 hours if it was an email sent to the prof or a forum post directed at the prof. If you don't get a response within that time, please feel free to remind the prof. It is likely that the prof did not notice your post or the email got stuck somewhere.

    Similarly we expect you to check email regularly and respond to emails written to you personally (not mass email) promptly.

    Not responding to a personal email is a major breach of professional etiquette (and general civility). Imagine how pissed off you would be if you met the prof along the corridor, said 'Hi prof, good morning' and the prof walked away without saying anything back. Not responding to a personal email is just as bad. Always take a few seconds to at least acknowledge such emails.  It doesn't take long to type "Noted. Thanks" and hit 'send'.

    The promptness of a reply is even more important when the email is requesting something from you that you cannot provide. Imagine you wrote to the prof requesting a reference letter and the prof did not respond at all because he/she did not want to give you one; you'd be quite frustrated because you wouldn't know whether to look for another prof or wait longer for a response. Saying 'No' is fine and in fact a necessary part of professional life; but saying nothing is not acceptable. If you didn't reply, the sender will not even know whether you received the email.


    Why so much bean counting? : OPTIONAL

    Sometimes, small things matter in big ways. e.g., all other things being equal, a job may be offered to the candidate who has the neater-looking CV although both have the same qualifications. This may be unfair, but that's how the world works. Students forget this harsh reality when they are in the protected environment of the school and tend to get sloppy with their work habits. That is why we reward all positive behaviors, even small ones (e.g., following precise submission instructions, arriving on time, etc.).

    But unlike the real world, we are forgiving. That is why you can still earn full marks for participation even if you miss a few things here and there.

    Related article: This Is The Personality Trait That Most Often Predicts Success (this is why we reward things like punctuality).